Wait-Notify for parallel & sequential processing in NiFi

I have a requirement where I need to execute 4 jobs in parallel, and only when the same item's job has finished in all 4 processors should the next processor be triggered. For this I have used Wait/Notify.
The flow looks like this:
4 parallel jobs -> Notify (release signal identifier = ${itemid}, signal counter name = ${processorname}) -> Wait (release signal identifier = ${itemid}, target signal count = 4), with the wait relationship looped back to the same Wait processor -> next processor
This works the first time, but I have noticed that the wait queue is not cleared even after the target signal count condition is met, and I suspect that is why it does not work for subsequent flows.
It should clear the wait queue once the criteria are met, right?

Related

Laravel Queue: multiple queues, multiple tasks, multiple remote servers

I'm working on an app which runs remote tasks (Task A and Task B) on a few (10) servers (s1 to s0), and once those are complete, Task C is run on the local server. All these tasks could take a while to finish (from a minute to an hour), but Task A takes between 4 and 20 times longer than Task B (and this could change for each run).
I don't wish to run more than one task on any server at a time. I'm trying to be efficient with how this works, so I think Laravel 8's queues would serve my purpose. My thinking: I have, say, 5 queues q1, q2, q3, q4, q5. I then add Task A to queues q1 to q3 for the first 3 servers, and Task B to q4 and q5 for s4 and s5; I would then repeat this for all the tasks. After this my queues would look like this:
q1 q2 q3 q4 q5
s1ta s2ta s3ta s4tb s5tb
s6ta s7ta s8ta s9tb s0tb
s1tb s2tb s3tb s4ta s5ta
s6tb s7tb s8tb s9ta s0ta
--tc
While this looks good, what if q1 gets to Task C while the other queues are still running? Is there a way I can trigger Task C when all queues are empty? Is there a better way to do this? Should I use something other than queues for this, and if so, what? Is an event triggered when a job in a queue finishes?
I await your thoughts and recommendations.
thanks
Craig
*** EDIT ***
Thinking more on this, it would make sense to run Task A and Task B on the same queue, one after the other, so:
q1 q2 q3 q4 q5
s1ta s2ta s3ta s4ta s5ta
s1tb s2tb s3tb s4tb s5tb
s6ta s7ta s8ta s9ta s0ta
s6tb s7tb s8tb s9tb s0tb
--tc
but the issue with Task C would still remain, and it would be good if a task could move to an empty queue if it hasn't started. Right now I've no idea where to begin...

Spring Batch: Terminating the current running job

I am having an issue terminating a currently running Spring Batch job. After going through the Spring documentation, I wrote the following in my code:
Set<Long> executions = jobOperator.getRunningExecutions("Job-Builder");
jobOperator.stop(executions.iterator().next());
The problem I am facing is that sometimes the job terminates as expected and other times it does not. Every time I call stop on the JobOperator it updates the BATCH_JOB_EXECUTION table. When the termination succeeds, the status of the job is updated to STOPPED and the jobExecution in my batch process is killed. When it fails, the rest of the batch's flows run to completion and the status is updated to FAILED in the BATCH_JOB_EXECUTION table.
But every time I call stop on the JobOperator I see this message in my console:
2020-09-30 18:14:29.780 [http-nio-8081-exec-5] INFO o.s.b.c.l.s.SimpleJobOperator:428 - Aborting job execution: JobExecution: id=33058, version=2, startTime=2020-09-30 18:14:25.79, endTime=null, lastUpdated=2020-09-30 18:14:28.9, status=STOPPING, exitStatus=exitCode=UNKNOWN;exitDescription=, job=[JobInstance: id=32922, version=0, Job=[Job-Builder]], jobParameters=[{date=1601504064263, time=1601504064262, extractType=false, JobId=1601504064262}]
My project has a series of flows and steps within it.
Overall, my batch process looks like this:
The JobBuilderFactory has 3 flows.
Each flow has a step builder and two tasklets.
Each step builder has a partitioner and a chunk-based (chunk size 100) itemReader, itemProcessor and itemWriter.
I am calling the stop method while the very first flow in my JobBuilderFactory is executing. The overall process takes about 30 minutes to complete, so there are still around 20-25 minutes to go from the time I call the stop method; the chunk size is 100 within every flow, and I am dealing with more than 500k records.
So, my question is: why does the jobExecution stop some of the times I call the stop method (which is what I want), and why is it unable to stop the jobExecution the remaining times?
Thanks in advance
It's not easy to figure out the reason for that from what you shared, but I can give you a couple of notes about stopping jobs:
jobOperator.stop does not guarantee that the job stops; it only sends a stop signal to the job execution. From what you shared, you are not checking the returned boolean that indicates whether the signal was sent correctly, so you should be doing that first.
You did not share your code, but you need to use StoppableTasklet instead of Tasklet to make sure the stop signal is correctly sent to your steps.
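To make both notes concrete, here is a minimal Java sketch, assuming a setup along the lines described in the question (names such as StopAwareExample, InterruptibleTasklet and doOneUnitOfWork are illustrative placeholders, not taken from the original code):
import org.springframework.batch.core.ExitStatus;
import org.springframework.batch.core.StepContribution;
import org.springframework.batch.core.launch.JobOperator;
import org.springframework.batch.core.scope.context.ChunkContext;
import org.springframework.batch.core.step.tasklet.StoppableTasklet;
import org.springframework.batch.repeat.RepeatStatus;

import java.util.Set;

public class StopAwareExample {

    // Note 1: check the boolean returned by stop(); false means the stop signal was not sent.
    public static void requestStop(JobOperator jobOperator) throws Exception {
        Set<Long> executions = jobOperator.getRunningExecutions("Job-Builder");
        if (!executions.isEmpty()) {
            boolean signalSent = jobOperator.stop(executions.iterator().next());
            if (!signalSent) {
                System.out.println("Stop signal could not be sent to the job execution");
            }
        }
    }

    // Note 2: a tasklet that honours the stop signal between units of work.
    public static class InterruptibleTasklet implements StoppableTasklet {

        private volatile boolean stopRequested = false;

        @Override
        public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) {
            if (stopRequested) {
                contribution.setExitStatus(ExitStatus.STOPPED);
                return RepeatStatus.FINISHED;   // finish cleanly instead of carrying on
            }
            doOneUnitOfWork();                  // placeholder for one slice of the real processing
            return RepeatStatus.CONTINUABLE;    // come back and re-check the flag
        }

        @Override
        public void stop() {
            stopRequested = true;               // invoked by the framework when a stop is requested
        }

        private void doOneUnitOfWork() {
            // real work goes here
        }
    }
}
When jobOperator.stop is called, the execution status is set to STOPPING (which matches the log message above) and, if the currently running step's tasklet implements StoppableTasklet, its stop() method is invoked; it is then up to the tasklet to notice the flag and finish promptly instead of letting the remaining flows run to completion.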

VHDL: Help understanding time steps/states and concurrency

I'm normally a C#/Java programmer and I'm still having trouble fully wrapping my head around hardware description.
I have a register that loads in a value. Afterwards, a comparator compares the output of the register with the value 16. If the value is less than or equal to 16, I go to State_0; if it's greater, I go to State_3.
I have a 'controlsignals' process running concurrently with my statetable process. For my control signals, I know that I have to set the enable for the register high when I'm in State_2, so:
controlsignals: PROCESS (Tstep_Q)
BEGIN
.... initialisation ...
CASE Tstep_Q IS
.... other states ....
WHEN T2 => --define signals in time step T2
enRegister <= '1';
For my state table:
statetable: PROCESS (Tstep_Q, regOutput)
BEGIN
CASE Tstep_Q IS
.... other states ....
WHEN T2 =>
IF ((regOutput - 16) > 0)
THEN Tstep_D <= T3;
ELSE Tstep_D <= T0;
END IF;
And near the end of my code I have:
fsmflipflops: PROCESS (Clock)
BEGIN
IF Clock'EVENT AND Clock = '1' THEN
Tstep_Q <= Tstep_D;
END IF;
END PROCESS;
reg: regn PORT MAP (somevalue, enReg, Clock, regOutput);
Since my state table and my control signals are concurrent blocks, my confusion is... will I first enable the register and then run the comparator to determine my next state, like I want my circuit to run (since the statetable is sensitive to regOutput)? Or would it be safer to create a new state after T2 where I have my comparator? Thank you in advance.
Concurrency of the comparator
Imagine that right after the clock edge, the state signal has been updated. You've got one clock period to do a comparison and set the next state.
Your 'statetable' is being evaluated at all times.
Timing of enRegister
Doing the comparison in T2 only makes sense if you can read the output of the register in the same clock cycle as you are setting the enable. This may be a problem, but your question does not contain the information to check that.
Sensitivity list of statetable
You want this process to run concurrently, so all its inputs need to go in the sensitivity list.
It looks like you are working from a decent reference and structuring your code well. I suspect that the sensitivity list is really the problem you are having - causing odd behaviour in simulation, so I'll keep this answer short and let you try to fix that.

Understanding delayed_job status

I've implemented long-running tasks in my Rails app using delayed_job along with delayed_job_web. My delayed_job configuration instructs jobs to be attempted once, and for failures to be retained:
config/initializers/delayed_job.rb:
Delayed::Worker.max_attempts = 1
Delayed::Worker.destroy_failed_jobs = false
I tried 2 test jobs that automatically raised errors, in order to see how failures behave, and then looked at the counts in the web interface.
My expectation was that Failed would have a count of 2 and that Enqueued / Working / Pending would all be 0, but the counts I saw did not match that. I can't find any documentation on what determines whether a job is Enqueued / Working / Pending, or even what the difference between Working and Pending is (the web interface describes both lists as "contains jobs currently being processed").
Can anyone provide some clarity?
If you check https://github.com/ejschmitt/delayed_job_web/blob/master/lib/delayed_job_web/application/app.rb, you'll see the following (starting at line 114):
when :working
  'locked_at is not null'
when :failed
  'last_error is not null'
when :pending
  'attempts = 0'
end
Enqueued would be the total number of delayed jobs, i.e. Delayed::Job.count.
Working jobs are those that have been locked by a delayed_job worker and are currently being worked on.
Failed are those that have a last_error.
Pending are those jobs that have never been attempted (attempts = 0).
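Because those conditions are independent filters rather than exclusive states, a job can show up in more than one count at once; for example, a retained failed job is still a row in the table, so it still contributes to the Enqueued total. If you want to verify the counts yourself, a minimal sketch along these lines reproduces them directly against the database (shown here in plain JDBC purely for illustration; the connection URL and credentials are placeholders, and delayed_jobs with its locked_at / last_error / attempts columns is the gem's default schema):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DelayedJobCounts {

    public static void main(String[] args) throws Exception {
        // Placeholder connection details; point this at the Rails app's database.
        try (Connection conn = DriverManager.getConnection("jdbc:postgresql://localhost/app_db", "user", "pass");
             Statement st = conn.createStatement()) {
            // Same conditions delayed_job_web uses for its dashboard counts.
            System.out.println("enqueued: " + count(st, "SELECT COUNT(*) FROM delayed_jobs"));
            System.out.println("working:  " + count(st, "SELECT COUNT(*) FROM delayed_jobs WHERE locked_at IS NOT NULL"));
            System.out.println("failed:   " + count(st, "SELECT COUNT(*) FROM delayed_jobs WHERE last_error IS NOT NULL"));
            System.out.println("pending:  " + count(st, "SELECT COUNT(*) FROM delayed_jobs WHERE attempts = 0"));
        }
    }

    private static long count(Statement st, String sql) throws Exception {
        try (ResultSet rs = st.executeQuery(sql)) {
            rs.next();
            return rs.getLong(1);
        }
    }
}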

How does Resque check when to run a job?

I have found resque-delayed:
https://github.com/elucid/resque-delayed
I can see that it lets me schedule delayed jobs. My question is: how does it check for delayed jobs? If I have 5000 delayed jobs spread over one month, I hope it doesn't check all of them every 10 seconds.
So how is it being done?
It does not have to check all the delayed jobs. It maintains a sorted set in Redis, the jobs being sorted by their scheduled time. See the code at:
https://github.com/elucid/resque-delayed/blob/master/lib/resque-delayed/resque-delayed.rb
Each time the daemon awakes, only the first item of the set needs to be checked (using a ZRANGEBYSCORE command). The daemon fetches the relevant jobs one by one, until the polling query returns no result, then it sleeps again.
Performance could be further improved by fetching the jobs n by n. It could be implemented using a server-side Lua script as a polling query:
local res = redis.call('ZRANGEBYSCORE', KEYS[1], '-inf', ARGV[1], 'LIMIT', 0, 10)
if #res > 0 then
  redis.call('ZREMRANGEBYRANK', KEYS[1], 0, #res - 1)
  return res
else
  return false
end
In one round trip, this script gets up to 10 jobs (if any are due) and deletes them from the zset. That is much better than the 11 ZRANGEBYSCORE and 10 ZREM commands currently required by resque-delayed.
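For illustration, a worker could run that script in a single EVAL round trip. The sketch below uses the Jedis client; the key name delayed_queue_schedule and the use of UNIX timestamps as scores are assumptions for the example, not details taken from resque-delayed:
import redis.clients.jedis.Jedis;

import java.util.Collections;
import java.util.List;

public class DelayedJobPoller {

    // The Lua polling script from above, collapsed onto one line.
    private static final String POLL_SCRIPT =
        "local res = redis.call('ZRANGEBYSCORE', KEYS[1], '-inf', ARGV[1], 'LIMIT', 0, 10) " +
        "if #res > 0 then redis.call('ZREMRANGEBYRANK', KEYS[1], 0, #res - 1) return res " +
        "else return false end";

    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            long now = System.currentTimeMillis() / 1000;   // assumes scores are UNIX timestamps
            Object result = jedis.eval(
                POLL_SCRIPT,
                Collections.singletonList("delayed_queue_schedule"),  // hypothetical zset key
                Collections.singletonList(String.valueOf(now)));
            if (result instanceof List) {
                for (Object payload : (List<?>) result) {
                    System.out.println("due job: " + payload);        // hand each payload to a worker
                }
            } else {
                System.out.println("nothing due yet");                // Lua false comes back as null
            }
        }
    }
}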
