Azure Event Grid - Delayed Execution

I am working on a design where I need to move files from one storage account to another storage account and then, after let's say a week, delete those files.
Once a file has been successfully moved, I can either send a message to Event Hub or write a record into a SQL DB.
For deletion of the files I have two approaches:
Polling
Poll the SQL DB entries daily, check each file's last-modified timestamp, and delete the file once it is old enough.
Update the SQL DB entry for the file to reflect that the file has been deleted.
Event based
Send a message to Event Grid as soon as the file is deleted.
However, I am not able to figure out how to wait one week before deleting a file. If I had to delete the file immediately, I could do it upon receiving the message.

Have you considered using Service Bus queues with the scheduled messages feature? Service Bus queues/topics may be a better fit for a delayed-processing requirement.
https://learn.microsoft.com/en-us/azure/service-bus-messaging/message-sequencing#scheduled-messages
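For example, a minimal sketch with the Java azure-messaging-servicebus SDK (the connection string, queue name and message body here are placeholders, not something from your design) might look like this:

    import com.azure.messaging.servicebus.ServiceBusClientBuilder;
    import com.azure.messaging.servicebus.ServiceBusMessage;
    import com.azure.messaging.servicebus.ServiceBusSenderClient;
    import java.time.OffsetDateTime;

    public class ScheduleFileDeletion {
        public static void main(String[] args) {
            // Placeholders: connection string and queue name are assumptions for this sketch.
            ServiceBusSenderClient sender = new ServiceBusClientBuilder()
                    .connectionString("<service-bus-connection-string>")
                    .sender()
                    .queueName("file-deletion-queue")
                    .buildClient();

            // The body carries whatever the consumer needs to delete the file later,
            // e.g. the container and path of the file that was just moved.
            ServiceBusMessage message = new ServiceBusMessage("container-name/path/to/moved-file");

            // Service Bus keeps the message invisible until the scheduled time,
            // so the consumer only sees it one week after the move.
            long sequenceNumber = sender.scheduleMessage(message, OffsetDateTime.now().plusWeeks(1));
            System.out.println("Scheduled deletion message, sequence number " + sequenceNumber);

            sender.close();
        }
    }

The consumer that receives the message a week later would then perform the actual delete against the storage account.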

Related

How to recover from "Proposed Flow does not contain a Connection with ID xxx but this instance has data queued in that connection"?

One of my NiFi nodes/instances is refusing to reconnect to the cluster:
Proposed flow is not inheritable by the flow controller and cannot completely replace the current flow due to: Proposed Flow does not contain a Connection with ID 4d2c4e9d-0176-1000-0000-0000310c611f but this instance has data queued in that connection, updateId=307]
Without going into why this happened, how can I recover from this error? Even if I overwrite the flow.xml.gz file, it refuses to accept it because it knows that there is data queued for that connection.
Can I flush / delete that data somehow?
I have tried deleting/moving:
flow.xml.gz
flowfile_repository
content_repository
database_repository
But I get the same error on startup. Where does NiFi track that connection 4d2c4e9d-0176-1000-0000-0000310c611f had data on this NiFi node?
Deleting the flow.xml.gz file (back it up first) should fix it.
Make sure that you are actually moving/deleting the right flow.xml.gz file, since it may not be in the default location.
So check the actual location of the flow file in $NIFI_HOME/conf/nifi.properties and look for nifi.flow.configuration.file. Then delete that file (back it up first) and the node should be able to reconnect.
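For reference, in a default installation that property looks like this (the path may well differ in yours):

    nifi.flow.configuration.file=./conf/flow.xml.gz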

CRM Dynamics workflow not triggering for all records

Problem: I set up a workflow in CRM Dynamics 365 Sales that starts when the value of a specific field changes. But it turned out that the process does not start when changes are made to old CRM records (those created before the process itself was activated).
Question: Is there any way to make CRM start the process for old records as well? Apologies that everything is in Russian; that is the version I work in.
The process works correctly when creating a record and when editing the field in a new record, but when editing the field in an existing record, the process does not start.
To make that workflow trigger on all records, set the scope to "Organization" instead of "User"; it should then work as intended.
It's not about when the record was created; those records are probably owned by somebody else, which is why a user-scoped workflow is not triggering at all.
Normally, workflows trigger on any future relevant change. Otherwise, you have to create a scenario that causes the trigger. A couple of options:
As you have already set the workflow to run on a specific field change, you can update that field and save the record, which should trigger the workflow. If it is only a small number of records this is feasible by hand; otherwise it is not a good idea to update them all manually. If you don't want to do it manually, you can update the records through another route, such as a console app that updates all the records (faster, assuming this is a one-time activity); see the sketch after these options.
Make the workflow available on demand and trigger it manually for all the records you want it to run on. Again, this is a manual process, but cleaner than the first one.
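As a rough sketch of the console-app idea (Java against the Dynamics 365 Web API; the org URL, the 'contacts' entity set, the 'new_status' field, the record IDs and the bearer token are all placeholders you would substitute, and authentication is out of scope here):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.List;

    public class RetriggerWorkflow {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            String orgUrl = "https://yourorg.crm.dynamics.com/api/data/v9.2";
            String token = "<oauth-bearer-token>";

            // IDs of the old records you want the workflow to run on (placeholders).
            List<String> recordIds = List.of("00000000-0000-0000-0000-000000000001");

            for (String id : recordIds) {
                HttpRequest request = HttpRequest.newBuilder()
                        .uri(URI.create(orgUrl + "/contacts(" + id + ")"))
                        .header("Authorization", "Bearer " + token)
                        .header("Content-Type", "application/json")
                        .header("OData-MaxVersion", "4.0")
                        .header("OData-Version", "4.0")
                        // Re-saving the watched field via an update should fire the "on field change" trigger.
                        .method("PATCH", HttpRequest.BodyPublishers.ofString("{\"new_status\": \"Updated\"}"))
                        .build();
                HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
                System.out.println(id + " -> HTTP " + response.statusCode());
            }
        }
    }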
You do not need to do any manual updates. The workflow you created should be enough to kick in.
Make sure your workflow is set to trigger on change of the field. It does not matter when the workflow was created; as long as the condition is satisfied it will kick in.

How to get the validation error message into an attribute from the ValidateRecord processor in NiFi

I am trying to validate JSON using the ValidateRecord processor with an AvroSchemaRegistry. I need to store the validation error message in a SQL table, so I tried to capture the error message in an attribute, but I am unable to do so. Any idea how to do it?
After your ValidateRecord processor, you can route flow files that are 'invalid' to a separate log and on to your SQL table, and you can do the same if they 'fail'. I am assuming that by 'error message' you mean the bulletin that occurs when the processor can neither validate nor invalidate the flow file against your schema.
A potential solution to this is to use the SiteToSiteBulletinReportingTask.
You can build a dataflow to receive these bulletin events, manipulate them as you want and store them in a location of your choice for your auditing needs.
From the sounds of it, the SiteToSiteBulletinReportingTask should be able to achieve what you want. To implement this, add a SiteToSiteBulletinReportingTask to the 'Reporting Tasks' in the NiFi Settings.
You can name your input port, have the bulletins flow towards your SQL store, and you should have what you're after.
You need to allow NiFi nodes to receive data via site-to-site on the input port and you also need to grant the correct permissions on the root process group so the nodes are able to see the component, view and modify the data.
Side note: I would usually log everything and route all failures and invalid records to log files, which I then push to a store, e.g. HBase/SQL. One suggestion I've seen is to configure the logging subsystem to additionally send specific error categories to a destination of your choice (i.e. active notification rather than passive parsing of logs). NiFi uses the very flexible logback framework (an evolution of log4j). The best part: changes to the $NIFI_HOME/conf/logback.xml configuration file do not require an instance restart and are picked up within 30 seconds or less.
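As a sketch of that logback idea (the appender name, log file path and logger category are illustrative; point the category at whichever component emits your validation errors), you could add something like this to $NIFI_HOME/conf/logback.xml:

    <!-- Illustrative appender: write ValidateRecord warnings/errors to a separate file -->
    <appender name="VALIDATION_ERRORS" class="ch.qos.logback.core.FileAppender">
        <file>logs/validation-errors.log</file>
        <encoder>
            <pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
        </encoder>
    </appender>

    <!-- Route only this processor's log category to the extra appender -->
    <logger name="org.apache.nifi.processors.standard.ValidateRecord" level="WARN" additivity="true">
        <appender-ref ref="VALIDATION_ERRORS"/>
    </logger>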

How to automate backout queue clearance in WMQ?

In production support I have to manually delete messages every day from hundreds of queues across different queue managers in IBM WebSphere MQ (WMQ). Can this be automated so that running a script deletes the messages in the backout queues?
My requirements:
1. By giving a queue name I should be able to delete messages from that queue, with date as the selection criterion.
There is quite an exhaustive list of possible solutions here:
http://www.capitalware.com/rl_blog/?p=1616
You should take a look at the options with the Java or C programs; by modifying the program you can implement the date-based selection and delete messages put on a given date.
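As a sketch of that approach with the IBM MQ classes for Java (the queue manager and queue names are placeholders, and client connection details such as host/channel are omitted, so bindings mode is assumed): browse the queue and destructively get only the messages whose put date is older than your cutoff.

    import java.util.Calendar;
    import java.util.GregorianCalendar;
    import com.ibm.mq.MQException;
    import com.ibm.mq.MQGetMessageOptions;
    import com.ibm.mq.MQMessage;
    import com.ibm.mq.MQQueue;
    import com.ibm.mq.MQQueueManager;
    import com.ibm.mq.constants.CMQC;

    public class ClearOldMessages {
        public static void main(String[] args) throws MQException {
            // Placeholders: queue manager name, queue name, and age threshold in days.
            MQQueueManager qmgr = new MQQueueManager("MQA1");
            MQQueue queue = qmgr.accessQueue("TEST.BACKOUT.Q1",
                    CMQC.MQOO_BROWSE | CMQC.MQOO_INPUT_SHARED | CMQC.MQOO_FAIL_IF_QUIESCING);

            GregorianCalendar cutoff = new GregorianCalendar();
            cutoff.add(Calendar.DAY_OF_MONTH, -2);   // delete anything put more than 2 days ago

            MQGetMessageOptions browse = new MQGetMessageOptions();
            browse.options = CMQC.MQGMO_BROWSE_NEXT | CMQC.MQGMO_NO_WAIT | CMQC.MQGMO_FAIL_IF_QUIESCING;

            MQGetMessageOptions destructive = new MQGetMessageOptions();
            destructive.options = CMQC.MQGMO_MSG_UNDER_CURSOR | CMQC.MQGMO_NO_WAIT;

            try {
                while (true) {
                    MQMessage msg = new MQMessage();              // fresh object so no msg/correl id matching
                    queue.get(msg, browse);                       // browse the next message
                    if (msg.putDateTime != null && msg.putDateTime.before(cutoff)) {
                        queue.get(new MQMessage(), destructive);  // remove the message under the browse cursor
                    }
                }
            } catch (MQException e) {
                if (e.reasonCode != CMQC.MQRC_NO_MSG_AVAILABLE) {
                    throw e;                                      // anything other than "queue exhausted" is a real error
                }
            } finally {
                queue.close();
                qmgr.disconnect();
            }
        }
    }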
My requirements: 1. By giving a queue name I should be able to delete messages from that queue, with date as the selection criterion.
If you need to delete messages older than a particular date then the blog posting will not help. You will need to use a program like MQ Batch Toolkit.
For example, to delete messages older than 2 days you would issue:
mqbt ClearQByTime -p MQA1 -q TEST.Q1 -d 2
If you need to run it on a daily basis then put the command into the scheduler on the server.

Spring Batch - one transaction over the whole job

I am using Spring Batch to execute a job that creates some objects in the database, creates a file from those objects, and then sends the file to an FTP server.
Thus, I have two steps: one that reads the configuration from the DB, inserts into the DB, and creates the file; the second sends the file to the FTP server.
The problem is that when there is a problem with the FTP server, I can't roll back the transaction (to cancel the new inserts into the DB).
How can I configure my Job to use just one transaction over the different steps?
This is a bad idea due to the transactional nature of Spring Batch.
IMHO a simple solution would be to mark the data saved in step 1 with a token generated when the job starts and, if the FTP upload fails, move to a cleanup step that deletes all data with that token.
I agree with bellabax: this is a bad idea.
But I wouldn't do a third cleanup step, because that step may also fail, leaving the inserted data in place.
You could mark the inserted entries with a flag that indicates the entries have not yet been sent to the FTP server.
The third step would switch the flag to indicate that these entries have been sent to the FTP server.
Then you just need a cron job/batch/fourth cleaning step/whatever to remove all entries that haven't been sent to the FTP server; a sketch of the flag-switching step follows.
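A minimal sketch of that flag-switching step as a Spring Batch Tasklet (the BATCH_EXPORT table and the SENT_TO_FTP and JOB_EXECUTION_ID columns are hypothetical names used for illustration):

    import org.springframework.batch.core.StepContribution;
    import org.springframework.batch.core.scope.context.ChunkContext;
    import org.springframework.batch.core.step.tasklet.Tasklet;
    import org.springframework.batch.repeat.RepeatStatus;
    import org.springframework.jdbc.core.JdbcTemplate;

    public class MarkSentToFtpTasklet implements Tasklet {

        private final JdbcTemplate jdbcTemplate;

        public MarkSentToFtpTasklet(JdbcTemplate jdbcTemplate) {
            this.jdbcTemplate = jdbcTemplate;
        }

        @Override
        public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) {
            // Only the rows written by this job execution are flipped to "sent".
            long jobExecutionId = chunkContext.getStepContext()
                    .getStepExecution()
                    .getJobExecution()
                    .getId();
            jdbcTemplate.update(
                    "UPDATE BATCH_EXPORT SET SENT_TO_FTP = 1 WHERE JOB_EXECUTION_ID = ?",
                    jobExecutionId);
            return RepeatStatus.FINISHED;
        }
    }

The separate cleanup (a cron job or a dedicated job) then simply deletes the rows where SENT_TO_FTP is still 0 and the rows are older than some threshold.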
