Hi, I have an Elasticsearch cluster.
I created an alert and action (say alert X).
I want to create a watcher whose trigger can act on alert X. How can I achieve that?
I can write a trigger from scratch, but I would rather use the already-created alert as the condition that triggers an action.
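For context, a watch written "from scratch" is a JSON document with a trigger, input, condition, and actions. A minimal sketch submitted through the Watcher REST API (host, credentials, index pattern, schedule, and threshold below are all placeholders, not taken from the question):

```python
import requests

# Minimal sketch of a watch definition; all values are placeholders.
watch = {
    # When to evaluate the watch.
    "trigger": {"schedule": {"interval": "10m"}},
    # What data to load: here, a search for error-level log entries.
    "input": {
        "search": {
            "request": {
                "indices": ["logs-*"],
                "body": {"query": {"match": {"level": "error"}}},
            }
        }
    },
    # Fire only when the search returned at least one hit.
    "condition": {"compare": {"ctx.payload.hits.total": {"gt": 0}}},
    # The action to take when the condition is met.
    "actions": {"log_it": {"logging": {"text": "errors found in logs-*"}}},
}

resp = requests.put(
    "http://localhost:9200/_watcher/watch/my-watch",
    json=watch,
    auth=("elastic", "<password>"),  # placeholder credentials
)
resp.raise_for_status()
```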
I am trying to run a job that includes a task which needs to run multiple times in parallel with different parameter values.
I understand that this is possible based on this post:
https://docs.databricks.com/data-engineering/jobs/jobs.html#maximum-concurrent-runs
But I can't figure out how.
To create such a job using the Databricks UI, follow the path below:
Workflows > Jobs > Create
Here, give the task a name and select its Type, Source, and Path.
You can also add parameters to the task.
Under Advanced options you can add dependent libraries, edit email notifications, edit the retry policy, and edit the timeout.
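Once the job exists, the "Maximum concurrent runs" setting from the linked docs is what lets the runs overlap. A sketch of kicking off the same job several times with different parameters through the Jobs API (workspace URL, token, job id, and parameter names are placeholders):

```python
import requests

HOST = "https://<workspace>.cloud.databricks.com"
HEADERS = {"Authorization": "Bearer <personal-access-token>"}
JOB_ID = 123  # hypothetical job id; its max_concurrent_runs must be > 1

# Each run-now call starts an independent run of the same job with its
# own parameter value; with max_concurrent_runs raised, they overlap.
for param in ["2023-01-01", "2023-01-02", "2023-01-03"]:
    resp = requests.post(
        f"{HOST}/api/2.1/jobs/run-now",
        headers=HEADERS,
        json={"job_id": JOB_ID, "notebook_params": {"run_date": param}},
    )
    resp.raise_for_status()
    print("started run", resp.json()["run_id"])
```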
I am working on a design where I need to move files from one storage account to another and, after let's say a week, delete those files.
Once a file has been successfully moved, I can either send a message to Event Hub or write a record into a SQL DB.
For the deletion of files I have two approaches:
Polling
Poll the SQL DB entries daily, check each file's last-modified timestamp, and delete the file if it is old enough (a sketch of this flow follows below).
Then update the SQL DB entry for the file to reflect that it has been deleted.
Event Based
Send a message to Event Grid as soon as the file is deleted.
However, I am not able to figure out how to wait for one week before deleting a file. If I had to delete a file immediately, I could do that upon receiving the message.
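A minimal sketch of the polling approach described above, run once a day (the table name, columns, connection strings, and blob layout are all hypothetical):

```python
from datetime import datetime, timedelta

import pyodbc
from azure.storage.blob import BlobServiceClient

SQL_CONN = "<odbc-connection-string>"
BLOB_CONN = "<storage-account-connection-string>"

blob_service = BlobServiceClient.from_connection_string(BLOB_CONN)
cutoff = datetime.utcnow() - timedelta(days=7)  # naive UTC timestamp

with pyodbc.connect(SQL_CONN) as db:
    cur = db.cursor()
    # Find files that were moved more than a week ago and not yet deleted.
    cur.execute(
        "SELECT id, container_name, blob_name FROM moved_files "
        "WHERE moved_at < ? AND is_deleted = 0",
        cutoff,
    )
    for row in cur.fetchall():
        blob_service.get_blob_client(row.container_name, row.blob_name).delete_blob()
        # Mark the SQL entry so the file is not processed again.
        cur.execute("UPDATE moved_files SET is_deleted = 1 WHERE id = ?", row.id)
    db.commit()
```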
Have you considered using Service Bus queues with the scheduled-messages feature? Service Bus queues/topics may be a better fit for the delayed-processing requirement.
https://learn.microsoft.com/en-us/azure/service-bus-messaging/message-sequencing#scheduled-messages
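A minimal sketch of that idea with the Python SDK (connection string, queue name, and payload are placeholders): schedule a deletion message one week out, and have a consumer delete the blob when the message is finally delivered.

```python
from datetime import datetime, timedelta, timezone

from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "<service-bus-connection-string>"
QUEUE = "file-cleanup"  # hypothetical queue name

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    with client.get_queue_sender(QUEUE) as sender:
        msg = ServiceBusMessage('{"container": "archive", "blob": "file1.txt"}')
        # The message stays invisible to receivers until this time,
        # one week from now; no polling loop is needed.
        deliver_at = datetime.now(timezone.utc) + timedelta(days=7)
        sender.schedule_messages(msg, deliver_at)
```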
Problem: I set up a workflow in Dynamics 365 Sales CRM that starts when the value of a specific field changes. But it turned out that the process does not start when changes are made to old CRM records (ones created before the process itself was activated).
Question: Is there any method to make CRM start the process even for old records?
The process works correctly when creating a record and when editing the field on a newly created record, but when editing the field on a pre-existing record, the process does not start.
To make that workflow trigger on all records, set its scope to "Organization" instead of "User"; it should then work as intended.
It's probably not about when the record was created; those records are likely owned by somebody else, and that is why a user-scoped workflow does not trigger on them at all.
Normally, workflows trigger on any future relevant change. For records that predate the workflow, we have to create a scenario that causes the trigger. A couple of options:
Since you have already set the workflow to run on a specific field change, you can update that field and save the record, which should trigger the workflow. If only a small number of records are affected, doing this by hand is feasible; otherwise manually updating all of them is not a good idea, and you can instead update the records programmatically, e.g. with a console app that touches the trigger field on every record (faster, assuming this is a one-time activity; see the sketch after this list).
Make the workflow on-demand and trigger it manually for all the records you want it to run on. This is again a manual process, but cleaner than the first option.
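A sketch of the programmatic "touch" via the Dataverse Web API (the org URL, entity set, and field name are hypothetical, and the token is acquired separately, e.g. via MSAL). Re-submitting the field in an update request should register as a field change and fire the workflow:

```python
import requests

BASE = "https://yourorg.api.crm.dynamics.com/api/data/v9.2"
HEADERS = {
    "Authorization": "Bearer <token>",
    "Content-Type": "application/json",
    "OData-MaxVersion": "4.0",
    "OData-Version": "4.0",
}

# Fetch the old records along with the current value of the trigger field.
resp = requests.get(
    f"{BASE}/accounts?$select=accountid,new_triggerfield", headers=HEADERS
)
resp.raise_for_status()

for record in resp.json()["value"]:
    # Re-submitting the field (even with the same value) should count as
    # a field change and trigger the workflow for this record.
    requests.patch(
        f"{BASE}/accounts({record['accountid']})",
        headers=HEADERS,
        json={"new_triggerfield": record["new_triggerfield"]},
    ).raise_for_status()
```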
You do not need to do any manual update. The workflow you created should be enough for it to kick in.
Make sure your workflow has a trigger on the change of that field. It does not matter when the workflow was created; as long as the condition is satisfied, it will kick in.
I am trying to validate JSON using the ValidateRecord processor with an AvroSchemaRegistry. I need to store the validation error message in a SQL table, so I tried to capture the error message in an attribute, but I am unable to do so. Any idea how to achieve this?
After your ValidateRecord processor, you can route flow files that are 'invalid' to a separate log and on to your SQL table, and you can do the same for those that 'fail'. I am assuming that by 'error message' you mean the 'bulletin' that occurs when the processor can neither validate nor invalidate the flow file against your schema.
A potential solution to this is to use the SiteToSiteBulletinReportingTask
You can build a dataflow to receive these bulletin events, manipulate them as you want and store them in a location of your choice for your auditing needs.
From the sounds of it, the SiteToSiteBulletinReportingTask should be able to achieve what you want. To implement this, add a SiteToSiteBulletinReportingTask under 'Reporting Tasks' in the NiFi Settings.
Name your input port, route its output toward your SQL store, and you should have what you're after.
You need to allow NiFi nodes to receive data via site-to-site on the input port, and you also need to grant the correct permissions on the root process group so the nodes are able to see the component and view and modify the data.
Side note: I would usually log everything and route all 'failure' and 'invalid' flow files to log files, which I then put into a store, e.g. HBase/SQL. One suggestion I've seen is to configure the logging subsystem to additionally send specific error categories to a destination of your choice (active notification as opposed to passive parsing of logs). NiFi leverages a very flexible logback system (an evolution of log4j). The best part: changes to the $NIFI_HOME/conf/logback.xml configuration file do not require an instance restart and are picked up within 30 seconds or less.
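A sketch of what that logback.xml addition could look like, placed inside the existing <configuration> element (the appender and encoder are standard logback; the logger name targets NiFi's standard ValidateRecord processor class, and the file path is an assumption):

```xml
<!-- Route log output from ValidateRecord to its own file. -->
<appender name="VALIDATION_ERRORS" class="ch.qos.logback.core.FileAppender">
    <file>logs/validation-errors.log</file>
    <encoder>
        <pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
    </encoder>
</appender>
<!-- additivity="true" keeps these messages in the main app log as well. -->
<logger name="org.apache.nifi.processors.standard.ValidateRecord"
        level="WARN" additivity="true">
    <appender-ref ref="VALIDATION_ERRORS"/>
</logger>
```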
I'm trying to create an OpenNMS alert when a certain folder ISN'T empty but can't seem to find a way of doing it. Any ideas?
I assume you have a service which goes down when your folder is not empty. By default, notifications are turned off; once enabled, every service-down event is notified by default. You can be more granular by filtering on nodes and services. The default setting sends a mail to the admin user, so set a mail address on the admin user. To configure access to your mail server, edit javamail-configuration.properties. I'm just trying to figure out where exactly you are stuck.
One approach could be to poll that directory for the empty/not-empty condition with an agent on your host system and expose the status, e.g. via Net-SNMP. You can then create a service using the SNMP monitor to poll the exposed OID, and create a mail notification for this particular service.
Yes, this can be done. I have performed similar tasks using simple perl and bash scripts on Linux.
OpenNMS allows you to create polling configurations based on scripts. Your script is expected to output "0" or "1", with 0 representing "OK" and 1 representing "Not OK".
You could use the GeneralPurposePoller:
https://wiki.opennms.org/wiki/GeneralPurposePoller
However, it seems that you should instead use the SystemExecuteMonitor:
https://wiki.opennms.org/wiki/SystemExecuteMonitor
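For reference, a minimal sketch of such a check script (in Python rather than Perl/Bash; the watched path is a placeholder). Following the convention above, it prints 1 ("Not OK") when the folder is not empty and 0 ("OK") when it is:

```python
#!/usr/bin/env python3
# Check script sketch: prints 1 ("Not OK") when the folder is NOT empty,
# 0 ("OK") when it is empty.
import os
import sys

WATCHED_DIR = "/data/watched"  # assumption: replace with your folder

try:
    not_empty = bool(os.listdir(WATCHED_DIR))
except FileNotFoundError:
    # A missing folder is treated as empty here; adjust to your needs.
    not_empty = False

print(1 if not_empty else 0)
sys.exit(0)
```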