I'm working on an app which runs remote tasks (Task A and Task B) on a few (10) servers (s1 to s9, plus s0) and, once all of those are complete, runs Task C on the local server. All of these tasks can take a while to finish (from a minute to an hour), but Task A takes between 4 and 20 times longer than Task B (and this can change for each run).
I don't wish to run more than one task on any server at a time. I'm trying to be efficient with how this works, so I think Laravel 8's queue system would serve my purpose. My thinking: I have, say, 5 queues (q1 to q5). I then add Task A to queues 1 to 3 for the first 3 servers, and Task B to q4 and q5 for s4 and s5, and repeat for all the tasks. After this my queues would look like this:
q1 q2 q3 q4 q5
s1ta s2ta s3ta s4tb s5tb
s6ta s7ta s8ta s9tb s0tb
s1tb s2tb s3tb s4ta s5ta
s6tb s7tb s8tb s9ta s0ta
--tc
While this looks good, what if q1 gets to Task C while the other queues are still running? Is there a way to trigger Task C only when all queues are empty? Is there a better way to do this? Should I use something other than queues for this and, if so, what? Is an event triggered when a job in a queue finishes?
I await your thoughts and recommendations.
thanks
Craig
*** EDIT ***
Thinking more on this, it would make sense to run Task A and Task B on the same queue, one after the other, so:
q1 q2 q3 q4 q5
s1ta s2ta s3ta s4ta s5ta
s1tb s2tb s3tb s4tb s5tb
s6ta s7ta s8ta s9ta s0ta
s6tb s7tb s8tb s9tb s0tb
--tc
but the issue with Task C would still remain, and it would be good if a task could move to an empty queue if it hasn't started yet. Right now I've no idea where to begin...
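One way to get that "run Task C once everything else is done" trigger in Laravel 8 is job batching. Below is a minimal sketch (the TaskA/TaskB/TaskC job class names and their constructor arguments are assumptions): each server gets a chain of [Task A, Task B], so a server never runs two tasks at once; the chains run in parallel across however many queue workers you start; and the batch's then() callback fires exactly once, after every chain has finished.

    use App\Jobs\TaskA;
    use App\Jobs\TaskB;
    use App\Jobs\TaskC;
    use Illuminate\Bus\Batch;
    use Illuminate\Support\Facades\Bus;

    // One chain per server: TaskB starts only after that server's TaskA
    // completes, so no server ever runs two tasks at the same time.
    // (TaskA and TaskB must use the Illuminate\Bus\Batchable trait.)
    $chains = collect(range(1, 10))
        ->map(fn ($server) => [new TaskA($server), new TaskB($server)])
        ->all();

    Bus::batch($chains)
        ->then(fn (Batch $batch) => TaskC::dispatch()) // all chains finished
        ->dispatch();

This also sidesteps the "move a task to an empty queue" problem: with a single queue and several workers, whichever worker is free picks up the next pending chain, so no manual balancing across q1..q5 is needed. And yes, an event is fired when a job finishes: Laravel dispatches Illuminate\Queue\Events\JobProcessed after each processed job.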
I am looking to run a service mix of 3 APIs with different loads sequentially: Q1 - 20%, Q2 - 10%, Q3 - 70%.
However, Q1 is independent, while Q2 depends on a variable generated in Q3.
I have the following setup. I cannot have multiple thread groups, and I need transaction controllers because I need to upload this to StormRunner.
ThreadGroup - Threads: 10, Loop: 1
--- Q1ThroughputController - 20%
--- Q1TransactionController
--- Q2ThroughputController - 10%
--- Q2TransactionController
--- Q3ThroughputController - 70%
--- Q3TransactionController
Current run looks like this:
Q3 - Fail
Q3 - Fail
Q3 - Fail
Q3 - Fail
Q1 - Fail
Q3 - Fail
Q1 - Pass
Q3 - Pass
Q2 - Pass
Q3 - Pass
According to JMeter Documentation:
Properties are not the same as variables. Variables are local to a thread; properties are common to all threads, and need to be referenced using the __P or __property function.
Your setup assumes that:
20% of users are executing Q1
10% of users are executing Q2
70% of users are executing Q3
so there is no way for a user who ran Q3 to switch to Q1.
You need to amend your correlation logic to use properties instead of variables. Properties are global, so they will be accessible to all threads, but you will have to do some custom Groovy scripting using suitable JSR223 Test Elements, as sketched below.
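For example, a minimal sketch (the variable name idFromQ3 and the property name sharedId are assumptions), using the standard props and vars bindings available in JSR223 elements:

    // JSR223 PostProcessor, child of the Q3 sampler:
    // copy the value extracted from Q3's response (a thread-local
    // variable) into a global property visible to all threads
    props.put('sharedId', vars.get('idFromQ3'))

    // JSR223 PreProcessor, child of the Q2 sampler:
    // read the global property back into this thread's variables so
    // the existing ${idFromQ3} correlation keeps working unchanged
    vars.put('idFromQ3', props.getProperty('sharedId'))

Note this alone is still racy: a thread may reach Q2 before any thread has finished Q3, which is where the blocking logic mentioned below comes in.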
An alternative solution would be to use the Inter-Thread Communication Plugin to pass your variables between different threads, and also to implement some blocking logic so the virtual user will "wait" for the variable value before proceeding.
I have a requirement where I need to execute 4 jobs in parallel and, when the same item's job is done in all 4 processors, trigger the next processor. For this I have used Wait/Notify.
The flow is like:
4 parallel jobs -> Notify (release signal identifier = ${itemid}, signal counter name = ${processorname}) -> Wait (release signal identifier = ${itemid}, target signal count = 4), with the wait relationship connected back to the same Wait processor -> next processor
This works the first time, but I have noticed that the wait queue is not cleared even after the target signal count condition is met, and I guess that is why it is not working for subsequent flows.
It should clear the waiting queue once the criterion is met, right?
Seeking advice on the COA correlation issue described below.
Background: there is an application A feeding data to an application B via MQ (nothing special: a remote queue definition pointing to a local queue definition on the remote QM). The sending app A is requesting COAs. This is a stable setup that has been working for years:
App A -> QM.A[Q1] -channel-> QM.B[Q2] -> App B
Here, Q1 is a remote queue definition pointing to Q2.
Problem: there is an application C which requires exactly the same data feed that A is sending to B via MQ, so the feed needs to be duplicated, subject to the following constraint.
Constraint: neither the code nor the app config of applications A and B can be changed; the duplication of the data feed from A to B should be transparent to both applications. A puts messages to the same queue Q1 on QM.A, and B gets messages from the same queue Q2 on QM.B.
Proposed solution: duplicate the feed on the MQ layer by creating a topic/subscription configuration on the QM of app B (see the MQSC sketch after the list below):
App A -> QM.A[Q1] -channel-> QM.B[QA->T->{S2,S3}->{Q2,Q3}] -> {App B, QM.C[Q4] -> App C}
Here:
Q1 - has its RNAME property updated to point to QA instead of Q2
QA - queue alias resolving to topic T
T - the topic
S2, S3 - administrative subscriptions delivering the published messages to Q2 and Q3
Q2 - unchanged, the same local queue definition that App B consumes from
Q3 - remote queue definition pointing to Q4
Q4 - local queue definition on QM.C, holding the copy of the messages sent from A to B
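For reference, a minimal MQSC sketch of that configuration (object names as above; the topic string 'FEED.AB' is an assumption, and channel/transmission queue definitions are omitted):

    * On QM.B: topic, alias queue, and two administrative subscriptions
    DEFINE TOPIC(T) TOPICSTR('FEED.AB')
    DEFINE QALIAS(QA) TARGET(T) TARGTYPE(TOPIC)
    DEFINE SUB(S2) TOPICOBJ(T) DEST(Q2)
    DEFINE SUB(S3) TOPICOBJ(T) DEST(Q3)
    DEFINE QREMOTE(Q3) RNAME(Q4) RQMNAME(QM.C)

    * On QM.A: repoint the existing remote queue at the alias
    ALTER QREMOTE(Q1) RNAME(QA)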
With this setup, duplication of the messages from app A to apps B and C works fine.
But... there is an issue.
Issue: application A is no longer able to correlate COAs, and that is the problem.
I'm not sure whether app A cannot correlate COAs at all, or (the more likely guess) it cannot correlate the additional COAs, e.g. the ones coming from QM.C.
Any idea or advice is very much appreciated.
I have set
    scheduler: true
on one node (localhost:8080) to make it the scheduler master.
The other node (localhost:8000) has the scheduler turned off:
    scheduler: false
How will the "scheduler master" assign tasks to the other node?
I worked this out. It was simple, but simple is tough.
We just have to follow the ActionHero (AH) architecture.
First of all, make sure AH is not using fakeredis.
Then the scheduler server should not have any task processors,
and all the other workers should have one or more task processors. You can set that in /config/tasks.js. Then, also in /config/tasks.js, name the queues in the queues array that each worker should work on.
Once they are sharing Redis, they will share tasks and start working on the tasks in the queues.
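A minimal sketch of what that can look like in /config/tasks.js (the queue name 'default' and the processor counts are illustrative assumptions; the min/maxTaskProcessors key names follow the AH config format of the version I used, so check your own config/tasks.js):

    // /config/tasks.js on the scheduler node (localhost:8080):
    // it enqueues scheduled tasks but runs none itself
    exports.default = {
      tasks: (api) => ({
        scheduler: true,
        queues: ['*'],
        minTaskProcessors: 0, // no task processors on the scheduler
        maxTaskProcessors: 0,
      }),
    };

and on each worker node (localhost:8000):

    // /config/tasks.js on a worker: no scheduler, one or more processors
    exports.default = {
      tasks: (api) => ({
        scheduler: false,
        queues: ['default'], // the queues this worker should work on
        minTaskProcessors: 1,
        maxTaskProcessors: 5,
      }),
    };

As long as both nodes point at the same real Redis instance in /config/redis.js, the scheduler enqueues and the workers consume.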
Can 1 TaskTracker run multiple JVMs?
Here is the scenario:
Assume there are 2 files (A & B) and 2 Data nodes (D1 & D2).
When you load A, assume it is getting split into A1 & A2 on D1 & D2
and when you load B, assume it is getting split into B1 & B2 on D1 & D2.
For some reason, let us assume D1 is busy with some other tasks
and D2 is available, and a couple of jobs are submitted,
one using file A and the other one using file B.
So now D2 is available and has blocks A2 & B2.
Will the JobTracker submit the code to the TaskTracker on D2 and run the tasks for A2 and B2 at the same time, or
will it first run A2 and only run B2 after A2 finishes?
If they can run in parallel, does that mean 1 TaskTracker and 2 JVMs, or will it create/spawn 2 TaskTrackers on D2?
By default the TaskTracker spawns one JVM for each task.
You can reuse JVMs by setting the configuration parameter mapred.job.reuse.jvm.num.tasks.
A TaskTracker (TT) can launch multiple map or reduce tasks in parallel on a single machine. By default a TT launches 2 map tasks (mapreduce.tasktracker.map.tasks.maximum) and 2 reduce tasks (mapreduce.tasktracker.reduce.tasks.maximum). These properties have to be overridden in mapred-site.xml (mapred-default.xml only ships the defaults).
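For example (the values are illustrative; the property names are the ones mentioned above), in mapred-site.xml:

    <!-- raise the per-TaskTracker slot counts -->
    <property>
      <name>mapreduce.tasktracker.map.tasks.maximum</name>
      <value>4</value>
    </property>
    <property>
      <name>mapreduce.tasktracker.reduce.tasks.maximum</name>
      <value>2</value>
    </property>
    <!-- reuse one JVM for multiple tasks of the same job (-1 = no limit) -->
    <property>
      <name>mapred.job.reuse.jvm.num.tasks</name>
      <value>-1</value>
    </property>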