Explicitly setting max jobs / flow limit on a BW sub-process - tibco

I know we can set Max Jobs and Flow Limit on a TIBCO starter process, but is there any way of explicitly setting them on a sub-process (non-starter process)?

Max Jobs and Flow Limit can only be set on process starters or on spawned subprocesses. Flow control on regular (i.e. non-spawned) subprocesses is determined by the parent process starter's configuration and cannot be overridden.
If you want to be able to control the flow of a subprocess, I see two options:
Make it a spawnable process.
Make it an independent process with its own process starter (e.g. a JMS Queue Receiver) and have the parent process invoke it over the appropriate protocol (e.g. JMS). This way you can control the subprocess's flow as you would with any process starter.

I agree with Nicolas. However, if, for example, your flow allows a maximum of 10 jobs to enter but you then want only one job at a time to execute, you could use a "Critical Section" group to make sure only one job accesses the resources at any given time. (This is just an example.)
"Critical section groups are used to synchronize process instances so that only one process instance executes the grouped activities at any given time. Any concurrently running process instances that contain a corresponding critical section group wait until the process instance that is currently executing the critical section group completes.
Critical Section groups are particularly useful for controlling concurrent access to shared variables (see Synchronizing Access to Shared Variables for more information). However, other situations may occur where you wish to ensure that only one process instance is executing a set of activities at a time. "

Related

kg.apc.jmeter.functions.FifoTimeout does not work if added to user.properties in jmeter

I am using inter thread communicator plugin to share data between two thread groups.
TG-1: generates an ID -> stores it in queue Q1
TG-2: picks an ID from Q1 -> does the processing
After some time, when the run duration of TG-1 is complete, it stops generating and storing IDs in Q1. TG-2 processes all the data in the queue and keeps waiting for new data in Q1, but Q1 will never receive any more data. My expectation was that when the run duration of TG-2 also completed, TG-2 would finish its job and exit. Why does TG-2 keep waiting for data in Q1? This exhausts the heap space and the test never stops, which is a serious issue.
To prevent this, I tried adding kg.apc.jmeter.functions.FifoTimeout=120 to the user.properties file, as suggested by Dmitri T in one of my previous questions on the same topic. However, this property is not taking effect. Has anybody else experienced the same thing with this plugin? What is the alternative?
We are not telepathic enough to guess what your setup is, which exact components of the Inter-Thread Communication plugin you're using, or how they're configured.
If you're using the functions: the timeout works fine for the __fifoPop() function; just make sure to restart JMeter after amending the property. __fifoGet() will simply return an empty value if the queue is empty.
If you're using the jp@gc - Inter-Thread Communication PreProcessor: you can specify the timeout directly in the GUI.
Also it is always possible to stop the test via Flow Control Action Sampler
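For reference, the property belongs in user.properties in JMeter's bin directory, and JMeter must be restarted for it to take effect (per the plugin's documentation, the value is in seconds):

```properties
# JMETER_HOME/bin/user.properties — restart JMeter after editing.
# Applies to __fifoPop() only; __fifoGet() returns an empty value
# immediately when the queue is empty.
kg.apc.jmeter.functions.FifoTimeout=120
```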

Laravel Queues for multi user environment

I am using Laravel 5.1, and I have a task that takes around 2 minutes to process; the task in question generates a report...
Now, it is obvious that I can't make the user wait for 2 minutes on the same page where I took the user's input; instead, I should process this task in the background and notify the user about its completion later...
So, to achieve this, Laravel provides queues that run tasks in the background (if I understood correctly). Now, for a multi-user environment, i.e. if more than one user requests report generation (say there are 4 users): given that the feature is named Queues, does it mean the tasks are performed one after the other (i.e. when 4 users request report generation one after another, the 4th user's report is only generated once the 3rd user's report is finished)?
If queues complete their tasks one after another, is there any way for tasks to be processed in the background immediately on a user's request, with the user notified later when the task is complete?
Queue-based architecture is a little more complicated than that. Laravel's queue system provides an interface to different messaging implementations such as RabbitMQ and Beanstalkd.
At any point in your code you can push a message onto the queue; in this context the message is termed a job. Your queue will then hold multiple jobs, which are handed out in FIFO order.
As for your questions: there are workers that listen to the queue, take a job, and execute it. It's up to you how many workers you want. If you have one worker, your tasks will be executed one after another; the more workers you run, the more jobs are processed in parallel.
Worker processes are started with Laravel's command-line interface, Artisan. Each process is one worker. You can start and keep alive multiple workers with Supervisor.

Since you know for sure that you are going to notify the user after around 2 minutes, I suggest using a cron job that checks every 2 minutes whether there are reports to generate and, if there are, sends the notification to the user. That check is a single simple query, so you don't need to worry much about performance.
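To make the "more workers, more parallelism" point concrete: a common setup is to let Supervisor keep several queue worker processes alive. A sketch of such a configuration (paths and program name are illustrative; on Laravel 5.1 the long-running worker is queue:work --daemon):

```ini
; /etc/supervisor/conf.d/laravel-worker.conf (illustrative paths)
[program:laravel-worker]
command=php /var/www/app/artisan queue:work --daemon --tries=3
process_name=%(program_name)s_%(process_num)02d
numprocs=4            ; four workers -> up to four reports generated in parallel
autostart=true
autorestart=true
```

With numprocs=4, four users' report jobs are picked up and processed concurrently rather than strictly one after another.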

Computing usage of independent cores and binding a process to a core

I am working with MPI, and I have a certain hierarchy of operations. For a particular value of a parameter _param, I launch 10 trials, each running a specific process on a distinct core. For n values of _param, the code runs in a certain hierarchy as:
driver_file ->
launches one process, which checks whether more than 10 processes are available. If so, it launches an instance of coupling_file with a specific _param value passed as an argument
coupling_file ->
does some elementary computation, and then launches 10 processes using MPI_Comm_spawn(), each corresponding to a trial_file while passing _trial as an argument
trial_file ->
computes work, returns values to the coupling_file
I am facing two dilemmas, namely:
How do I evaluate the required condition for the cores in driver_file?
That is, how do I find out how many processes have terminated, so that I can correctly schedule processes on idle cores? I thought of adding a blocking MPI_Recv() and using it to pass a variable that would tell me when a certain process has finished, but I'm not sure if this is the best solution.
How do I ensure that processes are assigned to different cores? I had thought about using something like mpiexec --bind-to-core --bycore -n 1 coupling_file to launch one coupling_file, followed by something like mpiexec --bind-to-core --bycore -n 10 trial_file
launched by the coupling_file. However, if I am binding processes to cores, I don't want the same core to end up with two or more processes. That is, I don't want _trial_1 of coupling_1 to run on core x, and then a second coupling_2 to launch _trial_2, which also gets bound to core x.
Any input would be appreciated. Thanks!
If it is an option for you, I'd drop the spawning processes thing altogether, and instead start all processes at once.
You can then easily partition them into chunks working on a single task. A translation of your concept could for example be:
Use one master (rank 0)
Partition the rest into groups of 10 processes, maybe create a new communicator for each group if needed, each group has one leader process, known to the master.
In your code you then can do something like:
if master:
    send a specific _param to each group leader (with a non-blocking send)
    loop over all your different _params:
        use MPI_Waitany or MPI_Waitsome to find groups that are ready
else:
    if group leader:
        loop endlessly:
            MPI_Recv _param from master
            coupling_file
            MPI_Bcast to group
            process trial_file
    else:
        loop endlessly:
            MPI_Bcast (get data from group leader)
            process trial_file
I think following this approach would allow you to solve both of your issues. Availability of process groups is detected with MPI_Wait*, though you might want to change the logic above so that a group notifies the master at the end of its task; that way the master only sends new data once a trial has finished, not while the previous trial is still running and another process group might turn out to be faster. And pinning is resolved, as you have a fixed number of processes that can be properly pinned during the usual startup.

How is wait_for_completion different from wake_up_interruptible

How is wait_for_completion different from wake_up_interruptible?
Actually, the question is: how are completion chains different from wait queues?
They look like the same concept to me.
The completion structure internally uses wait queues and locks.
It was introduced to address a very commonly occurring scenario, in which multiple threads are waiting on some event; once that event happens, you want only one of the waiting threads to start running.
The key here is that kernel developers don't have to implement and maintain the wait queue themselves, which makes a kernel developer's life easier.
Adding to Harman's answer, I would also say that those two functions are called in different contexts: wake_up_interruptible() will wake up all threads waiting on a wait queue, whereas wait_for_completion() will wait until a specific task completes. Those are two different things to me.

MFC CEvent class member function SetEvent, difference with Thread Lock() function?

What is the difference between SetEvent() and a thread Lock() function? Can anyone please help me?
Events are used when you want to start/continue processing once a certain task is completed i.e. you want to wait until that event occurs. Other threads can inform the waiting thread about the completion of this task using SetEvent.
On the other hand, a critical section is used when you want only one thread to execute a block of code at a time, i.e. you want a set of instructions to be executed by one thread without any other thread changing the state in the meantime. For example, when you are inserting an item into a linked list, which involves multiple steps, you don't want another thread to come in and try to insert another object into the list at the same time. So you block the other thread until the first one finishes, using a critical section.
Events can be used for inter-process communication, i.e. synchronizing activity among different processes. They are typically used for signalling the occurrence of an activity (e.g. a file write has finished). More information on events:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms686915%28v=vs.85%29.aspx
Critical sections can only be used within a process for synchronizing threads and use a basic lock/unlock concept. They are typically used to protect a resource from multi-threaded access (e.g. a variable). They are very cheap (in CPU terms) to use. The inter-process variant is called a Mutex in Windows. More info:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms682530%28v=vs.85%29.aspx
