Creating master agent and sub agent using snmp4j-agentx

I am doing an internship at a company. I have been asked to find a way to implement SNMP agents in one of their systems using Java. I tried to find a free MIB compiler for Java but failed, so I am trying to do it with the SNMP4J-AgentX library (because they specifically said they need master and sub-agents). I have never worked with it before, so I am having difficulty implementing it. There is a sample master agent and sub-agent in the library package, but I don't understand how I can modify them to include variables from my own MIB file. If anyone has a simpler sample master agent or sub-agent, it would be very helpful if you could share it. I am only a little familiar with the internal workings of an agent, so if anyone can help, please do so from a relatively basic level.
Many thanks in advance.

Adding your own MIB objects (so-called ManagedObjects) to your SNMP agent with SNMP4J-AgentX works the same way as with SNMP4J-Agent. The only exception is AgentX shared tables, but those are an advanced concept you generally do not need when you start using AgentX/SNMP.
Thus, I recommend reading the SNMP4J-Agent-Instrumentation-Guide.pdf to learn how MIB objects are registered and instrumented based on your requirements.
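As a rough sketch of what that looks like in code (the class name and OID below are placeholders, not from your MIB, and the exact hook where the bundled samples register their own objects may differ), a simple read-only scalar can be created and registered with the agent's MOServer like this:

    import org.snmp4j.agent.DuplicateRegistrationException;
    import org.snmp4j.agent.MOServer;
    import org.snmp4j.agent.mo.MOAccessImpl;
    import org.snmp4j.agent.mo.MOScalar;
    import org.snmp4j.smi.OID;
    import org.snmp4j.smi.OctetString;

    public class MyMibObjects {

        // Hypothetical instance OID under your own enterprise arc; replace it with
        // an OID from your MIB (scalar instances end with the .0 sub-identifier).
        private static final OID MY_SCALAR_OID = new OID("1.3.6.1.4.1.99999.1.1.0");

        // Call this with the MOServer of your master agent or sub-agent, e.g. from
        // the place where the sample agents register their own ManagedObjects.
        public static void register(MOServer server) throws DuplicateRegistrationException {
            MOScalar<OctetString> myScalar = new MOScalar<OctetString>(
                    MY_SCALAR_OID,
                    MOAccessImpl.ACCESS_READ_ONLY,
                    new OctetString("hello from my agent"));
            // A null context registers the object for all contexts (including the default one).
            server.register(myScalar, null);
        }
    }

Once registered, a GET on that OID is answered by the agent; tables and read-write objects follow the same pattern with the table-related ManagedObject classes described in the instrumentation guide.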

Related

Fuchsia: how to use a built-in capability in a component

I'm trying to learn and use Fuchsia for fun, and a pretty basic concept is keeping me from progressing.
I thought that, as a learning experience, I could write a simple HTTP client that prints the content of some random URL to the log. Really nothing fancy.
As I understand, using the network (in my case I'd like to utilize fuchsia.net.http.Loader) is a capability, which has to be granted to a running component. Makes sense, that's pretty much the core of the OS.
I also understand that the initiating component, the one that runs my component, needs to grant this capability to my component. That's fair.
What I don't understand, and I'd very much appreciate any additional information on (pretty please!), is how I can grant this to my component.
Specifically all demos and examples I saw had a custom client & server under a realm, which talked to each other. That's a good practice, but it doesn't bring in any capability that's built in.
What am I missing? Thanks in advance!
"I'm trying to learn and use Fuchsia for fun, and a pretty basic concept is keeping me from progressing."
Thanks for your interest in Fuchsia! First of all, if you haven't already gone through Fuchsia Fundamentals I would strongly suggest that as a starting point for many of the foundational concepts.
"Specifically all demos and examples I saw had a custom client & server under a realm, which talked to each other. That's a good practice, but it doesn't bring in any capability that's built in."
This is primarily because there isn't necessarily a concept of any set of components or capabilities being "built in" to the system. The capabilities available to components in the system are entirely dependent on the rest of the components in a particular product build and how they are organized (this is called the component topology).
"I thought that, as a learning experience, I could write a simple HTTP client that prints the content of some random URL to the log. Really nothing fancy."
The answer has a few sharp edges to it at the moment, as Fuchsia is a rapidly evolving open source project. Hopefully some of the details below will help you move forward.
Determine the capability routes
So you'll have to do a bit of work to figure out where the capability you need is provided and routed. In fact, one of the components exercises shows you how to do this for the fuchsia.net.http.Loader capability. Knowing where a capability is offered/used allows you to determine where your component would need to be instantiated to obtain the necessary capability.
You might also find some of the content in the Connect components developer guide useful in accessing the capability.
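For orientation, the component that wants to talk to the loader declares the protocol in its component manifest; a minimal .cml sketch might look like this (the binary name is just an example):

    {
        include: [ "syslog/client.shard.cml" ],
        program: {
            runner: "elf",
            binary: "bin/http_example",
        },
        use: [
            // Request the HTTP loader protocol from the parent realm.
            { protocol: "fuchsia.net.http.Loader" },
        ],
    }

The use declaration only states what your component needs; the parent realm still has to route the capability to it, which is what the next step is about.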
Run the component
Knowing where a capability is routed allows you to determine how to run your component. The most straightforward way of instantiating a component in the topology is to do so dynamically using ffx component. However, this requires a collection somewhere on the system with the capabilities you need. The ffx-laboratory realm where most examples are run has a very limited set of capabilities that does not include fuchsia.net.http.Loader.
You'll likely need to add your component statically to the topology using a core realm shard so that the necessary routes can be declared explicitly between the components that offer fuchsia.net.http.Loader and your component. With the component included statically in your product build, you can execute it using ffx component commands.
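A core realm shard is, roughly, a small .cml file added to your product build that declares your component as a child and routes the capability to it. A sketch is below; the component name and URL are made up, and the source child used here (#network) is an assumption about where fuchsia.net.http.Loader is typically offered, so use whatever route you actually found in the previous step:

    {
        children: [
            {
                name: "http-example",
                url: "fuchsia-pkg://fuchsia.com/http-example#meta/http-example.cm",
            },
        ],
        offer: [
            {
                // Route the loader from the realm that provides it to your component.
                protocol: "fuchsia.net.http.Loader",
                from: "#network",
                to: [ "#http-example" ],
            },
        ],
    }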
For more details on component execution, check out the Run components developer guide as well.
Run a CLI binary
Since this is a learning exercise, another option is to build your code as a binary that runs within the context of a component that already has the capabilities you need, rather than creating and running an entirely new component. This is commonly how CLI tools work. With the ffx component explore command and its --tools argument, you can run your code as a binary inside the existing component that provides the HTTP capability you are looking for, without needing to work through all the capability routing pieces described above.
For more details on ffx component explore, see Explore components.

Is it possible for NiFi to connect to itself with a Remote Process Group?

I'm working on a project that heavily uses Apache NiFi v1.10.0. I'm getting tired of clicking through hundreds of process groups to apply small fixes that are essentially the same.
I've recently discovered Remote Process Groups and I was wondering if there is a way to connect a NiFi instance to itself and implement DRY this way. I was thinking of implementing repeating components inside the root component and accessing them via remote access inside other process groups. Is this possible?
Right now I'm only getting SSLHandshakeException / PKIX path building failed.
If there are other ways to implement DRY, please tell me.
@Alex, I feel your pain. In a previous role, they had a process group with 100s of flows and would copy and paste the entire main group, turning it into 1000s of flows, all copies with small modifications in random places.
Although I am an advocate of programming this way to get a POC operational, I am a huge advocate of then evaluating how to make the flows dynamic from the highest level. The process I use is to go through the DFDLC and version the flow until, for example, I have one process group that can replace two by cleaning up the flow design differences between them. We consider this part of optimizing the flow to reduce the total number of active processors, too.
I highly recommend you do NOT use remote process groups within the same cluster. I also recommend you make common flows on the main canvas and connect them with input/output ports when you need to move from a deeper process group back up to the main canvas.
You can definitely do site-to-site to self; however, it will be less performant, because you are now taking local flow files and transferring them over a network connection to all the nodes in the cluster, even though some of them will end up back on the same node they started on.
You could use NiFi Registry and create versioned flows that contain reusable functionality. Then you make the change once, commit it back to NiFi Registry, and update the other instances of that versioned flow.
Bryan and Steven both offered good solutions. I will address the PKIX path building error you are encountering -- this indicates that NiFi is attempting to make an HTTPS connection to another service (likely in this case itself), and doesn't know how to verify the presented public certificate. The solution is to reference an SSLContextService configured with a truststore that contains the certificate. The Apache NiFi walkthroughs provide step-by-step instructions for performing these tasks.

Difference between Apache NiFi and StreamSets

I am planning to do a class project and was going through a few technologies that let me automate or set up the flow of data between systems, and found that there are a couple of them, i.e. Apache NiFi and StreamSets (to my knowledge). What I couldn't understand is the difference between them and the use-cases where each can be used. I am new to this, and if anyone could explain a bit, it would be highly appreciated. Thanks
Suraj,
Great question.
My response is as a member of the open source Apache NiFi project management committee and as someone who is passionate about the dataflow management domain.
I've been involved in the NiFi project since it was started in 2006. My knowledge of Streamsets is relatively limited so I'll let them speak for it as they have.
The key thing to understand is that NiFi was built to do one really important thing really well, and that is 'Dataflow Management'. Its design is based on a concept called Flow Based Programming, which you may want to read about and reference for your project: https://en.wikipedia.org/wiki/Flow-based_programming
There are already many systems which produce data such as sensors and others. There are many systems which focus on data processing like Apache Storm, Spark, Flink, and others. And finally there are many systems which store data like HDFS, relational databases, and so on. NiFi purely focuses on the task of connecting those systems and providing the user experience and core functions necessary to do that well.
Here are some of the key functions and design choices made to make that effective:
1) Interactive command and control
The job of someone trying to connect systems is to be able to rapidly and efficiently interact with the constant streams of data they see. NiFi's UI allows you to do just that: as the data is flowing, you can add features to operate on it, fork off copies of data to try new approaches, adjust current settings, see recent and historical stats, read helpful in-line documentation, and more. Almost all other systems, by comparison, have a model that is design-and-deploy oriented, meaning you make a series of changes and then deploy them. That model is fine and can be intuitive, but for the dataflow management job it means you don't get the interactive, change-by-change feedback that is so vital to quickly building new flows or to safely and efficiently correcting or improving the handling of existing data streams.
2) Data Provenance
A very unique capability of NiFi is its ability to generate fine-grained and powerful traceability details for where your data comes from, what is done to it, where it is sent, and when each of these things happens in the flow. This is essential to effective dataflow management for a number of reasons, but for someone in the early exploration phase working on a project, the most important thing this gives you is awesome debugging flexibility. You can set up your flows, let things run, and then use provenance to actually prove that it did exactly what you wanted. If something didn't happen as you expected, you can fix the flow, replay the object, and repeat. Really helpful.
3) Purpose built data repositories
NiFi's out-of-the-box experience offers very powerful performance, even on really modest hardware or virtual environments. This is because of the flowfile and content repository design, which gives us the high-performance yet transactional semantics we want as data works its way through the flow. The flowfile repository is a simple write-ahead-log implementation, and the content repository provides an immutable, versioned content store. That in turn means we can 'copy' data by only ever adding a new pointer (not actually copying bytes), or we can transform data by simply reading from the original and writing out a new version. Again, very efficient. Couple that with the provenance capabilities I mentioned a moment ago and it provides a really powerful platform.
Another really key thing to understand here is that in the business of connecting systems you don't always get to dictate things like the size of the data involved. The NiFi API was built to honor that fact, so our API lets processors receive, transform, and send data without ever having to load the full objects in memory. These repositories also mean that in most flows the majority of processors do not even touch the content at all. However, you can easily see from the NiFi UI precisely how many bytes are actually being read or written, so again you get really helpful information for establishing and observing your flows. This design also means NiFi can support back-pressure and pressure-release naturally, and these are really critical features for a dataflow management system.
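To make the streaming point concrete, here is a minimal sketch of a custom processor (property descriptors and error handling omitted, and the processor name is made up) that rewrites content through the ProcessSession callbacks without ever buffering the whole payload:

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.util.Collections;
    import java.util.Set;

    import org.apache.nifi.flowfile.FlowFile;
    import org.apache.nifi.processor.AbstractProcessor;
    import org.apache.nifi.processor.ProcessContext;
    import org.apache.nifi.processor.ProcessSession;
    import org.apache.nifi.processor.Relationship;
    import org.apache.nifi.processor.exception.ProcessException;
    import org.apache.nifi.processor.io.StreamCallback;

    public class UpperCaseContent extends AbstractProcessor {

        static final Relationship REL_SUCCESS = new Relationship.Builder()
                .name("success")
                .description("Upper-cased content")
                .build();

        @Override
        public Set<Relationship> getRelationships() {
            return Collections.singleton(REL_SUCCESS);
        }

        @Override
        public void onTrigger(ProcessContext context, ProcessSession session) throws ProcessException {
            FlowFile flowFile = session.get();
            if (flowFile == null) {
                return;
            }
            // Content is read and rewritten as a stream, so the full payload is
            // never held in memory regardless of its size.
            flowFile = session.write(flowFile, new StreamCallback() {
                @Override
                public void process(InputStream in, OutputStream out) throws IOException {
                    int b;
                    while ((b = in.read()) != -1) {
                        out.write(Character.toUpperCase(b)); // simple ASCII upper-casing
                    }
                }
            });
            session.transfer(flowFile, REL_SUCCESS);
        }
    }

The same pattern (session.read / session.write with stream callbacks) is what lets processors handle arbitrarily large objects.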
It was mentioned previously by the folks from the StreamSets company that NiFi is file oriented. I'm not really sure what the difference is between a file, a record, a tuple, an object, or a message in generic terms, but the reality is that when data is in the flow, it is 'a thing that needs to be managed and delivered'. That is what NiFi does. Whether you have lots of really high-speed tiny things or you have large things, and whether they came from a live audio stream off the Internet or from a file sitting on your hard drive, it doesn't matter. Once it is in the flow, it is time to manage and deliver it. That is what NiFi does.
It was also mentioned by the StreamSets company that NiFi is schemaless. It is accurate that NiFi does not force conversion of data from whatever it is originally into some special NiFi format, nor do we have to convert it back to some format for follow-on delivery. It would be pretty unfortunate if we did that, because it would mean that even the most trivial of cases would have problematic performance implications; luckily, NiFi does not have that problem. Further, had we gone that route, it would mean that handling diverse datasets like media (images, video, audio, and more) would be difficult, but we're on the right track, and NiFi is used for things like that all the time.
Finally, as you continue with your project and if you find there are things you'd like to see improved or that you'd like to contribute code we'd love to have your help. From https://nifi.apache.org you can quickly find information on how to file tickets, submit patches, email the mailing list, and more.
Here are a couple of fun recent NiFi projects to check out:
https://www.linkedin.com/pulse/nifi-ocr-using-apache-read-childrens-books-jeremy-dyer
https://twitter.com/KayLerch/status/721455415456882689
Good luck on the class project! If you have any questions, the users@nifi.apache.org mailing list would love to help.
Thanks
Joe
Both Apache NiFi and StreamSets Data Collector are Apache-licensed open source tools.
Hortonworks does have a commercially supported variant called Hortonworks DataFlow (HDF).
While both have a lot of similarities, such as a web-based UI, and both are used for ingesting data, there are a few key differences. They also both consist of processors linked together to perform transformations, serialization, etc.
NiFi processors are file-oriented and schemaless. This means that a piece of data is represented by a FlowFile (this could be an actual file on disk, or some blob of data acquired elsewhere). Each processor is responsible for understanding the content of the data in order to operate on it. Thus if one processor understands format A and another only understands format B, you may need to perform a data format conversion in between those two processors.
NiFi can be run standalone, or as a cluster using its own built-in clustering system.
StreamSets Data Collector (SDC), however, takes a record-based approach. What this means is that as data enters your pipeline (whether it's JSON, CSV, etc.), it is parsed into a common format, so that the responsibility of understanding the data format is no longer placed on each individual processor, and any processor can be connected to any other processor.
SDC also runs standalone, and also has a clustered mode, but its cluster mode runs atop Spark on YARN/Mesos instead, leveraging existing cluster resources you may have.
NiFi has been around for about the last 10 years (but less than 2 years in the open source community).
StreamSets was released to the open source community a little bit later in 2015. It is vendor agnostic, and as far as Hadoop goes Hortonworks, Cloudera, and MapR are all supported.
Full Disclosure: I am an engineer who works on StreamSets.
They are very similar for data ingest scenarios.
Apache NiFi (HDF) is more mature and StreamSets is more lightweight.
Both are easy to use and both have strong capabilities. And StreamSets could easily
They both have companies behind them, Hortonworks and Cloudera.
Obviously there are more contributors working on NiFi than on StreamSets, and, of course, NiFi has more enterprise deployments in production.
Two of the key differentiators between the two, IMHO, are:
Apache NiFi is a Top-Level Apache project, meaning it has gone through the incubation process described here, http://incubator.apache.org/policy/process.html, and can accept contributions from developers around the world who follow the standard Apache process, which ensures software quality. StreamSets is Apache LICENSED, meaning anyone can reuse the code, etc., but the project is not managed as an Apache project. In fact, in order to even contribute to StreamSets, you are REQUIRED to sign a contract: https://streamsets.com/contributing/ . Contrast this with the Apache NiFi contributor guide, which wasn't written by a lawyer: https://cwiki.apache.org/confluence/display/NIFI/Contributor+Guide#ContributorGuide-HowtocontributetoApacheNiFi
StreamSets "runs atop Spark on YARN/Mesos instead, leveraging existing cluster resources you may have," which imposes a bit of a restriction if you want to deploy your dataflows further toward the edge, where the devices that are generating the data live. Apache MiNiFi, a sub-project of NiFi, can run on a single Raspberry Pi, while I am fairly confident that StreamSets cannot, as YARN or Mesos require more resources than a Raspberry Pi provides.
Disclosure: I am a Hortonworks employee

Where to begin with SNMP agent implementation?

Before I start, I realise there are a few SNMP-related questions here already, but not many seem to have been answered - that could mean I'm asking in the wrong place, but I don't know where else to go at the moment.
I've been reading up as best I can on SNMP for a couple of days but am finding it difficult to get my head around what is meant to be happening. The idea is eventually we will integrate SNMP into our Java application server which will allow the end users to incorporate it into their pre-existing Network Management Systems(NMS).
Unfortunately, I'm feeling entirely confused about what is meant to be going on. What I understood from talking to the end users (which was, unfortunately, before doing any research) was that the monitoring allows their existing NMS to give their admin guys a view of the vital statistics in a tree-type display, giving them feedback on different parts of the system at a high level and allowing them to dig down into specific subsystems.
From reading around, we would implement an 'Agent' which has several defined interfaces allowing GET requests etc. to be processed and responded to. That makes sense, but I am at a loss to work out what the format of the communication is - there don't seem to be any specific examples of what any of the messages look like or how the information is encoded.
More of my confusion, though, is regarding the Management Information Base (MIB). I had, wrongly, assumed that the interface of the agent would allow the monitored attributes to be requested and then, in turn, the values for those attributes, allowing any new Agent to be started and detected without any configuration on the NMS end (with the exception of authentication in v3). This, if I understand correctly, is not the case, and the Agent must instead define MIBs which can be used by the NMS to determine those attributes. My confusion is increased when people start referring to thousands of existing MIBs and saying that they can be reused, which I don't understand. Is the intention that a single MIB definition can be used to, say, describe a particular attribute of a network device (something simple like whether a router's internet connection is up: yes/no) for many different devices? If so, I don't believe that our software would allow the monitoring of anything common to any other device/system, but should we be looking for already existing MIBs? At the moment I don't really see a good rationale for such a system; surely it would be easier for the Agent to export that information - so I'd appreciate it if someone could enlighten me!
I think it would help if I was able to setup a simple SNMP agent and some sort of client, I could begin to see the process and eventually inspect the communication between the two but am finding it difficult to find anywhere that provides any information on doing such a thing. Nagios has been recommended to us as a test 'client'/NMS but their 'get started quick' section recommends downloading a 600Mb virtual machine - surely there is a quicker way to get started?
Any help or suggestions will be appreciated. I have been through the Wiki page, but it doesn't seem to go into much detail about the MIBs, and, having not had to deal with anything like the referenced RFCs before, while they may contain all of the information, they seem completely impenetrable to me at the moment. Or are there any books that can be recommended for an overview and implementation of v3?
Thanks for reading and even more thanks if you think you can help!
It seems to me that you have read all the SNMP information piece by piece in a disorganized way. This is not recommended and of course leads to confusion.
What about forgetting what you have learnt so far and diving into a good book such as Essential SNMP?
http://shop.oreilly.com/product/9780596008406.do
Click the Google Preview icon to preview it please.
You cannot depend on a network forum to teach you the ABCs, as I have found that to be impractical.
The communications interface is SNMP. That's the protocol used for transmission (usually on top of UDP). The thing that services information requests is an SNMP Agent. The thing that sends information requests is an SNMP Manager.
The definition of what information should be made available by the Agent, and requested by the Manager, goes in a MIB. A MIB is the "glue", a directory of what sort of things any particular system can/should offer. It maps numeric codes to names and types that allow us to make sense of the data, much like how a phone directory maps phone numbers to people's names and addresses.
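As a small illustration (a sketch using the SNMP4J Java library, since you mention a Java application server; the address and community string are placeholders), this is what a Manager-side GET of the standard sysDescr object looks like. The numeric OID only means "system description" because the SNMPv2-MIB defines it that way:

    import org.snmp4j.CommunityTarget;
    import org.snmp4j.PDU;
    import org.snmp4j.Snmp;
    import org.snmp4j.event.ResponseEvent;
    import org.snmp4j.mp.SnmpConstants;
    import org.snmp4j.smi.GenericAddress;
    import org.snmp4j.smi.OID;
    import org.snmp4j.smi.OctetString;
    import org.snmp4j.smi.VariableBinding;
    import org.snmp4j.transport.DefaultUdpTransportMapping;

    public class SysDescrGet {
        public static void main(String[] args) throws Exception {
            Snmp snmp = new Snmp(new DefaultUdpTransportMapping());
            snmp.listen();

            // SNMPv2c target; replace the address/community with your agent's values.
            CommunityTarget target = new CommunityTarget();
            target.setCommunity(new OctetString("public"));
            target.setAddress(GenericAddress.parse("udp:127.0.0.1/161"));
            target.setVersion(SnmpConstants.version2c);
            target.setRetries(1);
            target.setTimeout(2000);

            // 1.3.6.1.2.1.1.1.0 is sysDescr.0; the SNMPv2-MIB is what gives this
            // number its name and type on the manager side.
            PDU pdu = new PDU();
            pdu.setType(PDU.GET);
            pdu.add(new VariableBinding(new OID("1.3.6.1.2.1.1.1.0")));

            ResponseEvent response = snmp.get(pdu, target);
            System.out.println(response.getResponse() == null
                    ? "timeout" : response.getResponse().getVariableBindings());
            snmp.close();
        }
    }

If the agent answers, the response PDU carries the sysDescr value as a variable binding; if not, you get a timeout (null response).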
Generally you would create, ship, and use your own MIBs that describe aspects specific to your own product, but you are supposed to service some standard information requests as well, which are defined in existing MIBs. Yes, there are thousands of other pre-existing MIBs, and the likelihood that you need more than one or two of these is remote. They are typically published versions of MIBs for existing products.
The conventional way to "toy around" is to install Net-SNMP (a software suite that includes an agent implementation and allows you to "bolt on" your own logic and your own MIBs fairly easily) then examine the results using a packet capturer like Wireshark.
For a fuller implementation in production you may stick with Net-SNMP, or write your own Agent software, or do what I did and create a hybrid of the two that's a little more flexible and performant but uses Net-SNMP's backend for handling all the low-level SNMP stuff.
Your first step, though, is to read a book or some other teaching material that can clear all your misconceptions, because guesswork won't cut it.
I had success using the samples from this page. Both the shell and Perl Net-SNMP code were very straightforward to implement and query.

Which workflow engine should I choose for implementing dynamic reconfiguration of workflows?

I want to be able to interrupt a running workflow instance, say when a new activity is about to be invoked, and extract information both about the structure of the workflow and the data in the particular instance. Then I will consult with an external system and according to its response I will possibly alter the behaviour of the workflow. The options I would like to have are addition/removal of activities and altering parameters for the activities to be invoked.
I am currently struggling with which engine is best to go with. I have looked at WWF, Apache ODE, Oracle Workflow, and ActiveBPEL, and as far as I understand they can all provide me with the options I need. I would really appreciate any recommendations on which one will be the easiest to work with for my purpose, and any restrictions any of the above might have that would prevent me from reaching my goal.
Thanks
I am sorry not to directly answer your question, but you may be interested in a state machine framework called Stateless, created by Nicholas Blumhardt (of Autofac). I have used this instead of Windows Workflow where I needed to quickly configure my steps for a workflow. I have one configuration file that I alter and can introduce new steps into the workflow quite easily. See my SO answer here for more details.
Essentially, you define a state as State<T>, which allows you to persist your state in a database easily.

Resources