Objective: I am looking for a way to implement an SNMP trap receiver as a custom processor in Apache NiFi.
I have looked at references on how to create a custom processor in NiFi (Apache docs and YouTube videos),
and at the NiFi source code of GetSNMP (including AbstractSNMPProcessor), ListenSyslog, GetFile, etc.
I have a simple Java SNMP trap receiver using the SNMP4J library that can listen on a specific address over a UDP- or TCP-based port. On occurrence of a trap (which can be simulated with a simple SNMP4J trap sender), it can print the PDU details.
Now, while trying to turn this code into a NiFi custom processor, I am not sure where to put the listening mechanism and where to process the actual PDU. I am kind of stuck here.
More details:
The GetSNMP processor targets a specific OID and offers GET or WALK strategies to fetch information. For my objective, I want a processor that receives SNMP traps sent to the system where the NiFi server is running. I could not find a way to extend the GetSNMP code for this.
In the ListenSyslog processor there is a blocking-queue mechanism, but from it I could not work out how to listen on an IP address for traps, nor how exactly to use the ProcessContext and ProcessSession arguments of a custom processor's onTrigger method.
Any inputs are welcome...
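For what it's worth, the ListenSyslog-style pattern mentioned above can be sketched in plain Java. SNMP4J and the NiFi API are deliberately left out so the sketch is self-contained; the comments indicate where @OnScheduled, a CommandResponder, and onTrigger(ProcessContext, ProcessSession) would plug in. Class and method names here are illustrative, not from any real processor:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Sketch of the ListenSyslog-style pattern: a background listener pushes
// received trap payloads into a queue; the framework-driven onTrigger()
// polls the queue and would emit one FlowFile per payload.
class TrapQueueSketch {
    private final BlockingQueue<String> traps = new LinkedBlockingQueue<>();

    // In a real processor, an @OnScheduled method would start the SNMP4J
    // listener and register a CommandResponder whose callback calls this.
    public void enqueue(String pduDetails) {
        traps.offer(pduDetails);
    }

    // In a real processor this is the body of onTrigger(context, session):
    // poll with a short timeout and write each PDU into a FlowFile via
    // session.create() / session.write() / session.transfer().
    public List<String> onTrigger() {
        List<String> out = new ArrayList<>();
        try {
            String pdu = traps.poll(20, TimeUnit.MILLISECONDS);
            while (pdu != null) {
                out.add(pdu); // session.write(flowFile, ...) in NiFi
                pdu = traps.poll();
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return out;
    }
}
```

The key point is that the listener thread and onTrigger never touch the socket at the same time; they only share the thread-safe queue.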
One Possible Solution:
I could not create a custom processor in Apache NiFi to receive SNMP traps, but something else was possible, which I will describe here. It may be of some help to others.
There was no change in the Java code. I used the ExecuteProcess processor of NiFi to run the above Java code. On success, the result is pushed to further components in the NiFi flow.
I also tried to catch SNMP traps using Linux tools such as net-snmp's snmptrapd. These options did not actually catch the traps in my tests, which I ran with a Java trap sender and with traps generated using commands found online.
If you have better ideas, please share.
Related
I am trying to work with FreeSWITCH using the Event Socket Library, and I am a bit surprised it has no abstraction like the Session object of the internal scripting languages (which can be created, bridged, etc. using a simple API). Is this the case, or do I misunderstand?
If I understand correctly, ESL lets you send API commands like originate and receive events, and it is up to the application to track call state by processing those events; there are no helpers for this, correct?
So even if scripts using the Event Socket Library (ESL) "can be run from anywhere, achieving the same results as built-in languages", it is up to the application developer to implement a Session abstraction on his side when using ESL. In other words, ESL is a low-level interface, and much more effort is necessary to, e.g., establish a call with originate, get its state (by processing events), and then bridge it, e.g. with uuid_transfer?
We are using the value of the "Unique-ID" header field to correlate events.
The implementation uses the following library with custom add-ons: https://github.com/esl-client/esl-client.
PS: for Lua, see: How to get value of SIP header in Freeswitch?
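As a rough illustration of the Session abstraction one ends up building on top of raw ESL events, here is a minimal tracker keyed by the Unique-ID header. The event names are standard FreeSWITCH channel events, but the event-to-state mapping shown is a simplification, not the full channel state model:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a per-call "session" tracker driven by raw ESL events,
// keyed by the Unique-ID header of each event.
class CallStateTracker {
    private final Map<String, String> states = new ConcurrentHashMap<>();

    // Feed each event's Unique-ID and Event-Name header values in here.
    void onEvent(String uniqueId, String eventName) {
        switch (eventName) {
            case "CHANNEL_CREATE": states.put(uniqueId, "CREATED");  break;
            case "CHANNEL_ANSWER": states.put(uniqueId, "ANSWERED"); break;
            case "CHANNEL_BRIDGE": states.put(uniqueId, "BRIDGED");  break;
            case "CHANNEL_HANGUP": states.remove(uniqueId);          break;
            default: /* ignore events we don't model */              break;
        }
    }

    // Returns the tracked state, or null for unknown / hung-up calls.
    String stateOf(String uniqueId) {
        return states.get(uniqueId);
    }
}
```

With esl-client, the onEvent method would be called from the library's event listener callback, pulling the two header values out of each received event.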
We have multiple teams' NiFi applications running on the same NiFi machine. Is there any way to capture the logs specific to my application? Also, the default nifi-app.log file is difficult to use for tracking issues, and the bulletin board shows error messages for only 5 minutes. How can the errors be captured and an email alert sent in NiFi?
Please help me get through this. Thanks in advance!
There are a couple of ways to approach this. One is to route failure relationships from processors to a PutEmail processor, which can send an alert on errors. Another is to use a custom reporting task to alert a monitoring service when a certain number of flowfiles are in an error queue.
Finally, we have heard that in multitenant environments, log parsing is difficult. While NiFi aims to reduce or completely eliminate the need to visually inspect logs by providing the data provenance feature, in the event you do need to inspect the logs, we recommend searching the log by processor ID to isolate relevant messages. You can also use NiFi itself to ingest those same logs and perform parsing and filtering activities if desired. Future versions may improve this experience.
By parsing the NiFi log, you can separate the entries specific to your team's applications, using the process group ID and the NiFi REST API. Check the link below for a NiFi template and Python code to solve this issue:
https://link.medium.com/L6IY1wTimV
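As a rough sketch of the log-parsing idea (independent of the linked template), filtering nifi-app.log lines by a component ID can start as a simple substring match, since NiFi's default log pattern includes the component UUID in messages of the form ProcessorName[id=&lt;uuid&gt;]. The class name and sample line format here are illustrative assumptions:

```java
import java.util.List;
import java.util.stream.Collectors;

// Keep only log lines that mention a given component ID (a processor or
// process group UUID), so one team's messages can be isolated from a
// shared nifi-app.log.
class NifiLogFilter {
    static List<String> linesFor(List<String> logLines, String componentId) {
        return logLines.stream()
                .filter(line -> line.contains(componentId))
                .collect(Collectors.toList());
    }
}
```

In practice you would stream the file rather than hold it in memory, and you could feed the filtered lines back into NiFi (e.g. via TailFile plus RouteText) instead of standalone code.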
You can route all the errors in a process group to the same processor; it could be a regular UpdateAttribute or a custom processor. That processor adds the path and all the relevant information, then sends the flowfile to a general error/logs flow. The error flow inspects the error information inside the flowfile and decides whether to send an email, to whom, and so on.
Using this approach, the system stays simple and inside NiFi, so you don't add more layers of complexity, and you have only one error-handling processor per process group.
This is the way we manage errors at my company.
I have the following two scenarios, and for each one I need a recommendation as to which NiFi processor to use:
In the first scenario, I have RESTful web services running outside NiFi. NiFi needs to get/post/delete/update some data by calling a specific REST API. Once the REST API receives the request from NiFi, it sends a response back. Which NiFi processor should I use here?
In the second scenario, I have an application with its own GUI running outside NiFi. The user needs some information, so the application sends a request to NiFi. Is there a NiFi processor that accepts a request from an external application, processes it, and sends a response back?
I have actually read all the questions about GetHTTP and InvokeHTTP.
I initially tried the InvokeHTTP processor, with both GET and POST calls, but I don't see any response from the REST API running outside NiFi.
I did not try GetHTTP.
I am using NiFi itself; there is no custom code involved.
I expect NiFi to be able to call a REST API running outside, and to accept and process requests coming from an outside application.
Yep, NiFi comes bundled with processors that satisfy both of your requirements.
For scenario #1, you can use a combination of GetHTTP/PostHTTP, which, as their names imply, are HTTP clients that make GET and POST calls respectively. However, the community later introduced InvokeHTTP, which offers more features, such as support for the NiFi Expression Language, support for incoming flowfiles, etc.
For scenario #2, you can use either ListenHTTP or the combination of HandleHttpRequest/HandleHttpResponse. The latter gives you a more robust web-service implementation, while the former is a simple web-hook kind of listener. I haven't worked much with ListenHTTP, so I can't comment further on it.
Having said that, for your second scenario, if your objective is to consume NiFi statistics, you can directly hit NiFi's REST API rather than building a separate NiFi flow with web-service capability.
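To illustrate conceptually what the scenario #2 processors do, here is a stdlib-only Java stand-in. This is not NiFi code; inside a flow, HandleHttpRequest/HandleHttpResponse (or ListenHTTP) play this accept-process-respond role, and the path and response body below are made up for the example:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

// Minimal HTTP listener: accept a request from an external application,
// "process" it, and send a response back, the way a
// HandleHttpRequest -> ... -> HandleHttpResponse flow does inside NiFi.
class MiniListener {
    static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/query", exchange -> {
            byte[] body = "processed".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }
}
```

In NiFi, everything between accepting the request and sending the response is just a chain of processors operating on the flowfile created by HandleHttpRequest.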
Useful Links
https://pierrevillard.com/2016/03/13/get-data-from-dropbox-using-apache-nifi/
https://dzone.com/articles/using-websockets-with-apache-nifi
https://ddewaele.github.io/http-communication-with-apache-nifi/
I am thinking of using Spring State Machine for a TCP client. The protocol itself is given and based on proprietary TCP messages with message id and length field. The client sets up a TCP connection to the server, sends a message and always waits for the response before sending the next message. In each state, only certain responses are allowed. Multiple clients must run in parallel.
Now I have the following questions related to Spring State machine.
1) During the initial transition from disconnected to connected, the client sets up a connection via java.net.Socket. How can I make this socket (or the DataOutputStream and BufferedReader objects obtained from it) available to the actions of the other transitions?
In this sense, the socket would be some kind of global resource of the state machine. The only way I have seen so far would be to put it in the message headers. But this does not look very natural.
2) Which runtime environment do I need for Spring State Machine?
Is a JVM enough or do I need Tomcat?
Is it thread-safe?
Thanks, Wolfgang
There's nothing wrong with using event headers, but those are not really global resources, as a header exists only for the duration of event processing. I'd add the needed objects to the machine's extended state, which is then available to all actions.
You need just a JVM. By default, machine execution is synchronous, so there should not be any threading issues. The docs have notes on replacing the underlying executor with an asynchronous one (this is usually done when multiple concurrent regions are used).
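A minimal sketch of the extended-state idea, with a plain map standing in for the machine's variables. In real Spring State Machine code the map would be stateMachine.getExtendedState().getVariables(), and actions would read it through the StateContext; the ConnectAction name and SOCKET_KEY constant are hypothetical:

```java
import java.net.Socket;
import java.util.Map;

// Action for the disconnected -> connected transition: open the socket
// and stash it in the extended-state variables so that actions on later
// transitions can retrieve it without passing it through headers.
class ConnectAction {
    static final String SOCKET_KEY = "socket";

    static void execute(Map<Object, Object> extendedStateVars,
                        String host, int port) throws Exception {
        Socket socket = new Socket(host, port);
        extendedStateVars.put(SOCKET_KEY, socket);
        // A later action would do:
        //   Socket s = (Socket) extendedStateVars.get(SOCKET_KEY);
        // and use s.getOutputStream() / s.getInputStream().
    }
}
```

Since the machine runs synchronously by default, actions do not race on these variables; if you switch to an asynchronous executor, you would need to revisit that assumption.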
Does anybody have some info, links, or pointers on how cross-process EventBus communication occurs? From the documentation I conclude that multiple Vert.x instances (thus separate JVM processes) can be clustered and communicate via the EventBus. However, there is little to no documentation on how to achieve this.
Looking at the docs, I can see that the publish/registerHandler methods take an address as a String, which works within a process, but I cannot wrap my head around how it works across processes and how to register and publish to an address. Does it work over HTTP or TCP? From the API perspective, do I need to pass a port and a process signature?
Cross-process communication happens via the EventBus. Multiple Vert.x instances can be started up and clustered to allow separate instances on the same or other machines to communicate. The low-level clustering is handled by Hazelcast. The configuration lives in the cluster.xml file in the conf folder of your Vert.x install; you can learn more about the file's format in the Hazelcast docs. Clustering is transparent to your handlers and works over TCP.
You can test it by running two or more instances on your local machine, started with the -cluster flag. Look at the example being run, and the config changes required, in: How to use eventbus messaging in vertx?