OpenNMS alert when a folder is not empty?

I'm trying to create an OpenNMS alert when a certain folder ISN'T empty but can't seem to find a way of doing it. Any ideas?

I assume you have a service which goes down when the folder is not empty. By default, notifications are turned off globally; once you turn them on, every service-down event generates a notification. You can be more granular by filtering on nodes and services. The default setting sends a mail to the admin user, so set a mail address in the admin user's profile. To configure access to your mail server, edit javamail-configuration.properties. I'm just trying to figure out where exactly you're stuck.
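For reference, the mail server settings live in $OPENNMS_HOME/etc/javamail-configuration.properties. A minimal sketch (host and address are placeholder values, and property names may vary slightly between OpenNMS versions):
# $OPENNMS_HOME/etc/javamail-configuration.properties (excerpt, placeholder values)
org.opennms.core.utils.mailHost=smtp.example.com
org.opennms.core.utils.fromAddress=opennms@example.com
# set to true and add credentials only if your mail server requires authentication
org.opennms.core.utils.authenticate=false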

One approach would be to check the directory for the empty/not-empty condition with an agent on your host system and expose the status, e.g. via Net-SNMP. You can then create a service that uses the SNMP monitor to poll the exposed OID, and create a mail notification for that particular service.
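A minimal sketch of the agent side, assuming Net-SNMP's extend mechanism and a hypothetical directory /var/spool/myqueue; the script prints 1 when the folder is not empty:
#!/bin/sh
# check_dir_not_empty.sh - prints 1 if the directory contains entries, 0 if empty
DIR=/var/spool/myqueue    # hypothetical path, adjust to your folder
if [ -n "$(ls -A "$DIR" 2>/dev/null)" ]; then
    echo 1
else
    echo 0
fi
Registered in snmpd.conf with a line like
extend dirNotEmpty /usr/local/bin/check_dir_not_empty.sh
the script's output becomes readable at NET-SNMP-EXTEND-MIB::nsExtendOutput1Line."dirNotEmpty", which an OpenNMS SNMP monitor can then match against.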

Yes, this can be done. I have performed similar tasks using simple perl and bash scripts on Linux.
OpenNMS allows you to create polling configurations based on scripts. Your script is expected to output "0" or "1", with 0 representing "OK" and 1 representing "Not OK".
You could use the GeneralPurposePoller:
https://wiki.opennms.org/wiki/GeneralPurposePoller
However, it seems that you should instead use the SystemExecuteMonitor:
https://wiki.opennms.org/wiki/SystemExecuteMonitor
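For illustration, a sketch of the poller side with SystemExecuteMonitor (service name, interval, and script path are made up; check the wiki page above for the exact parameters your OpenNMS version supports):
<!-- poller-configuration.xml: service definition, inside a <package> -->
<service name="Folder-Not-Empty" interval="300000" user-defined="true" status="on">
    <parameter key="script" value="/opt/opennms/contrib/folder_not_empty.sh"/>
    <parameter key="args" value="/var/spool/myqueue"/>
</service>
<!-- matching monitor entry, next to the other <monitor> elements -->
<monitor service="Folder-Not-Empty" class-name="org.opennms.netmgt.poller.monitors.SystemExecuteMonitor"/>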

Related

I have a problem sending a downstream to a gateway

I'm using a Dragino DLOS8 gateway and a Dragino LT-22222-L end node. I wrote a script to read and show the values on my end node's inputs, but I couldn't control my relays. I found an example script (in a Dragino article titled "Communication with ABP End Node") showing a function to control them (it controlled the digital outputs, but I changed it to the relays), which is:
echo "${DEV_2},imme,hex,030101" > /var/iot/push/down
I even tried a more specific one:
echo "${DEV_1},imme,hex,030101,20,1,SF12,869525000,1" > /var/iot/push/down
The article indicates that I have to create a file in the directory /var/iot/push for downstream purposes. I tried using WinSCP and the command touch down, but the file was deleted a few seconds later. If anyone has used these devices or knows about this, please help me.
I had a similar problem with Dragino, also with the device logfiles in the /var/iot/channels directory. I got information from Dragino support that those files are "consumed" by the MQTT and TCP processes and so are periodically deleted: my understanding is that the LoRaWAN, MQTT, or TCP application has to work with those files as soon as they are generated.
Notice that "imme" sends downstream immediately to C type devices, maybe "time" (downstream after receiving data from node) is better for your application.

How to get the validation error message into an attribute from the ValidateRecord processor in NiFi

I am trying to validate JSON using the ValidateRecord processor via AvroSchemaRegistry. I need to store the validation error message in a SQL table, so I tried to capture the error message in an attribute, but I am unable to. Any idea how to do it?
After your ValidateRecord processor, you can choose to route flow files which are 'invalid' to a separate log and route them to your SQL table; you can do the same if they 'fail'. I am assuming that by 'error message' you mean the 'bulletin' which occurs when the processor can neither validate nor invalidate the flow file based on your schema.
A potential solution to this is to use the SiteToSiteBulletinReportingTask
You can build a dataflow to receive these bulletin events, manipulate them as you want and store them in a location of your choice for your auditing needs.
From the sounds of it, the SiteToSiteBulletinReportingTask should be able to achieve what you want. To implement this, add a SiteToSiteBulletinReportingTask under 'Reporting Tasks' in the NiFi Settings.
You can name your input port and have it flow towards your SQL store and you should have what you're after.
You need to allow NiFi nodes to receive data via site-to-site on the input port, and you also need to grant the correct permissions on the root process group so the nodes are able to see the component and to view and modify the data.
Side note: I would usually log everything and have all failures and invalid records route to log files, which I then put into a store, e.g. HBase/SQL. One suggestion I've seen is to configure the logging subsystem to additionally send specific error categories to your destination of choice (i.e. active notification vs. passive parsing of logs). NiFi leverages a very flexible logback system (an evolution of log4j). The best part: changes to the $NIFI_HOME/conf/logback.xml configuration file do not require an instance restart and are picked up within 30 seconds or less.
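As a sketch of that suggestion (appender name and syslog host are hypothetical; this is an addition, not NiFi's stock configuration), the change to $NIFI_HOME/conf/logback.xml could look like:
<!-- forward ERROR-level events to a remote syslog endpoint for active notification -->
<appender name="ERRORS_TO_SYSLOG" class="ch.qos.logback.classic.net.SyslogAppender">
    <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
        <level>ERROR</level>
    </filter>
    <syslogHost>logs.example.com</syslogHost>  <!-- hypothetical destination -->
    <facility>LOCAL0</facility>
    <suffixPattern>%logger %msg</suffixPattern>
</appender>
<!-- then add the reference to the existing <root> element -->
<appender-ref ref="ERRORS_TO_SYSLOG"/>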

How to show table-like information in Nagios?

My setup is something like this.
I have multiple servers. Each server has multiple instances of the same service (a multi-tenant-like architecture). Now I want to get the status of all services running on all servers using SNMP.
The problem I am facing is: how can I show table-like information in Nagios?
That is, when I click on any particular server, it should show me the list of services; when I click on any service, it should again give me the list of instances of that particular service.
There's no such feature in Nagios. You will need to set up a check for each of the service instances running on the monitored host.
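As a minimal sketch (host name, OIDs, and instance names are hypothetical, and check_snmp here stands for a command definition that passes its arguments on to the check_snmp plugin), each instance becomes its own service object:
# one Nagios service object per service instance on the host
define service {
    use                 generic-service
    host_name           app-server-01
    service_description myservice-tenant1
    check_command       check_snmp!-o .1.3.6.1.4.1.99999.1.1.1 -r 1
}
define service {
    use                 generic-service
    host_name           app-server-01
    service_description myservice-tenant2
    check_command       check_snmp!-o .1.3.6.1.4.1.99999.1.1.2 -r 1
}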

In rsyslog, how can I trigger an hourly email aggregating entries in one of our logs?

One of our apps has been configured to log certain errors to a log on a remote server using rsyslog. I've been asked to provide an hourly email listing the errors logged within the last hour. I've looked at ommail, but it doesn't seem to do exactly this. Any suggestions on how best to do this?
I would go low-tech on this:
put the error messages in a separate file, like
*.err /var/log/error.log
then rotate it hourly via logrotate.
From logrotate, you can run a script in the prerotate or postrotate part, where you can take the contents of the file and send them via email.
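A sketch of that approach (recipient is hypothetical; note that most distributions invoke logrotate only daily from cron, so an hourly rotation also needs an hourly cron entry running logrotate, and the hourly directive requires a reasonably recent logrotate):
# /etc/logrotate.d/error-digest
/var/log/error.log {
    hourly
    rotate 24
    missingok
    notifempty
    prerotate
        # mail the last hour's errors just before the file is rotated away
        mail -s "Hourly error digest from $(hostname)" ops@example.com < /var/log/error.log
    endscript
}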
ommail is more for sending individual messages that match a certain filter, so it would be hacky to make it send such digests.

Open a JDBC connection in a specific AS400 subsystem

I have a web service that calls some stored procedures on an AS400 via JTOpen.
What I would like is for the connections used to call the stored procedures to be opened in a specific subsystem with a specific user, instead of qusrwrk/quser as now (the default).
I think I might be able to clone the qusrwrk subsystem to make it start with a specific user, but what I cannot figure out is the mechanism to open the connection in the specific subsystem.
I guess there should be a property at the connection level to say subsystem=MySubsystem.
But unfortunately I haven't found that property.
Any hint would be appreciated.
Flavio
Let the system take care of the subsystem the database server job is started in.
You should just focus on the application (which is what IBM i excels in).
If need be, you can tweak subsystem parameters for QUSRWRK to improve performance by allocating memory, etc.
The system uses a pool of prestarted jobs, as described in the FAQ: "When I do WRKACTJOB, why is the host server job running under QUSER instead of the profile specified on the AS400 object?"
To improve performance, the host server jobs are prestarted jobs running under QUSER. When the Toolbox connects to a host server job in order to perform an API call, run a command, etc., a request is sent from the Toolbox to an available prestarted job. This request includes the user profile specified on the AS400 object that represents the connection. The host server job receives the request and swaps to the specified user profile before it runs the request. The host server itself originally runs under the QUSER profile, so output from the WRKACTJOB command will show the job as being owned by QUSER. However, the job is in fact running under the profile specified on the request. To determine what profile is being used for any given host server job, you can do one of three things (the first two are sketched after the list):
1. Display the job log for that job and find the message indicating which user profile is used as a result of the swap.
2. Work with the job and display job status attributes to view the current user profile.
3. Use Navigator for i to view all of the server jobs, which will list the current user of each job. You can also use Navigator for i to look at the server jobs being used by a particular user.
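For example, a sketch of options 1 and 2 from a 5250 session (the job number is made up; QZDASOINIT is the typical database host server prestart job):
WRKACTJOB SBS(QUSRWRK)                                /* locate the QZDASOINIT jobs */
DSPJOBLOG JOB(123456/QUSER/QZDASOINIT)                /* option 1: job log shows the profile swap */
DSPJOB JOB(123456/QUSER/QZDASOINIT) OPTION(*STSA)     /* option 2: status attributes show the current user */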
