Consumer Group in Chronicle Queue - chronicle

Is it possible to use consumer groups in Chronicle Queue, so that multiple grouped instances can watch the same queue but each message is consumed exactly once across the group?

You can add padding to each excerpt in the queue to record which worker is to process the message.
To make this dynamic, you can initialise it to 0 and have each worker, as it becomes available, CAS it from 0 to its own id. The CAS succeeds for only one worker and records which one picked up the work, i.e. it is the reader which atomically writes its id. This only takes a fraction of a microsecond.
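The claim pattern described above can be sketched in plain Java. Here the excerpt's padding field is modelled with an AtomicInteger rather than the real Chronicle Bytes API, and the class and method names are illustrative, not part of Chronicle Queue:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the CAS claim pattern: the "padding" in the excerpt is modelled
// as an AtomicInteger initialised to 0 (unclaimed). In a real Chronicle Queue
// you would CAS directly on the excerpt's bytes instead.
public class WorkClaim {
    static final int UNCLAIMED = 0;
    private final AtomicInteger owner = new AtomicInteger(UNCLAIMED);

    /** Returns true only for the single worker whose CAS from 0 to its id wins. */
    public boolean tryClaim(int workerId) {
        return owner.compareAndSet(UNCLAIMED, workerId);
    }

    public int owner() {
        return owner.get();
    }
}
```

If two workers race on the same excerpt, exactly one `tryClaim` returns true, and the winner's id stays recorded for later inspection.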

Related

Is there any way to check the current bulk queue size in OpenSearch?

My OpenSearch cluster sometimes returns the error "429 Too Many Requests" when writing data. I know there is a queue, and when the queue is full it shows that error. Is there any API to check the bulk queue status and its current size? Example: queue 150/200 (nearly full).
Yes, you can use the following API call
GET _cat/thread_pool?v
You will get something like this, where you can see the node name, the thread pool name (look for write), the number of active requests currently being carried out, the number of requests waiting in the queue and finally the number of rejected requests.
node_name name active queue rejected
node01 search 0 0 0
node01 write 8 2 0
The write thread pool can run up to 1 + the number of CPUs requests at the same time, i.e. that many can be active concurrently. If all threads are active and new requests come in, they go into the queue (default size 10000). If both the active threads and the queue are full, requests start to be rejected.
Your mileage may vary, but when optimizing this, you're looking at:
keeping rejected at 0
minimizing the number of requests in the queue
making sure that active requests get carried out as fast as possible.
Instead of increasing the queue size, it is usually preferable to increase the number of CPUs. If you have heavy ingest pipelines kicking in, it is often a good idea to add dedicated ingest nodes whose goal is to execute those pipelines instead of running them on the data nodes.
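As a rough illustration of reading that output, here is a small Java sketch that parses one data row of the `_cat/thread_pool?v` response shown above and flags a nearly full write queue. The class name and the 10000 default queue capacity are assumptions; check your cluster settings for the real value:

```java
// Parses one data row of the `GET _cat/thread_pool?v` output
// (node_name name active queue rejected) and reports how full
// the write queue is, against an assumed default capacity.
public class ThreadPoolRow {
    static final int DEFAULT_WRITE_QUEUE_CAPACITY = 10_000; // assumption

    final String node;
    final String pool;
    final int active;
    final int queue;
    final int rejected;

    ThreadPoolRow(String line) {
        String[] f = line.trim().split("\\s+");
        node = f[0];
        pool = f[1];
        active = Integer.parseInt(f[2]);
        queue = Integer.parseInt(f[3]);
        rejected = Integer.parseInt(f[4]);
    }

    /** Queue fullness as a fraction of the assumed capacity. */
    double queueFullness() {
        return (double) queue / DEFAULT_WRITE_QUEUE_CAPACITY;
    }

    /** Unhealthy if anything was rejected or the queue is over 80% full. */
    boolean isUnhealthy() {
        return rejected > 0 || queueFullness() > 0.8;
    }
}
```

The 80% threshold is arbitrary; the point is to keep `rejected` at 0 and the queue well below capacity, as described above.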

Laravel Queue start a second job after first job

In my Laravel 5.1 project I want to start my second job when the first one has finished.
Here is my logic:
\Queue::push(new MyJob())
and when this job finishes I want to start this one:
\Queue::push(new ClearJob())
How can I realize this?
If you want this, you just need to define a single queue.
A queue is just a list/line of things waiting to be handled in order,
starting from the beginning. When I say things, I mean jobs. - https://toniperic.com/2015/12/01/laravel-queues-demystified
To get the opposite of what you want (asynchronously executed jobs), you should define a new queue for every job.
Multiple Queues and Workers
You can have different queues/lists for
storing the jobs. You can name them however you want, such as “images”
for pushing image processing tasks, or “emails” for queue that holds
jobs specific to sending emails. You can also have multiple workers,
each working on a different queue if you want. You can even have
multiple workers per queue, thus having more than one job being worked
on simultaneously. Bear in mind having multiple workers comes with a
CPU and memory cost. Look it up in the official docs, it’s pretty
straightforward.

In Spring how to configure a queue with multiple conditions

I have configured a queue in my .XML file in such a way that it is populated after processing records. I need to add a condition so that the queue becomes active only when the queue depth reaches 1000 or a delay time is reached. Is that possible? I am not using JMS.
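One common way to express the size-or-delay release condition described above can be sketched in plain Java, with no JMS or Spring API assumed; all class and method names here are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

// Buffers messages and releases them as a batch once EITHER the depth
// threshold (e.g. 1000) is reached OR the first buffered message has
// waited longer than the configured delay.
public class BatchingQueue<T> {
    private final int releaseDepth;
    private final long maxDelayMillis;
    private final List<T> buffer = new ArrayList<>();
    private long firstEnqueuedAt = -1;

    public BatchingQueue(int releaseDepth, long maxDelayMillis) {
        this.releaseDepth = releaseDepth;
        this.maxDelayMillis = maxDelayMillis;
    }

    public synchronized void add(T item, long nowMillis) {
        if (buffer.isEmpty()) firstEnqueuedAt = nowMillis;
        buffer.add(item);
    }

    /** Returns the whole batch if either condition holds, otherwise an empty list. */
    public synchronized List<T> drainIfReady(long nowMillis) {
        boolean depthReached = buffer.size() >= releaseDepth;
        boolean delayReached = !buffer.isEmpty()
                && nowMillis - firstEnqueuedAt >= maxDelayMillis;
        if (!depthReached && !delayReached) return new ArrayList<>();
        List<T> batch = new ArrayList<>(buffer);
        buffer.clear();
        firstEnqueuedAt = -1;
        return batch;
    }
}
```

A scheduled task would call `drainIfReady` periodically so the delay trigger fires even when no new messages arrive.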

Solution to slow consumer(eventProcessor) issue in LMAX Disruptor pattern

While using the disruptor, there may be a consumer(s) that is lagging behind, and because of that slow consumer, the whole application is affected.
Keeping in mind that every producer(Publisher) and consumer(EventProcessor) is running on a single thread each, what can be the solution to the slow consumer problem?
Can we use multiple threads on a single consumer? If not, what is a better alternative?
Generally speaking, use a WorkerPool to allow multiple pooled worker threads to work on a single consumer, which is good if you have tasks that are independent and of potentially variable duration (e.g. some short tasks, some longer).
The other option is to have multiple independent workers process the events in parallel, but with each worker only handling events modulo N (e.g. with 2 threads, one thread processes odd event IDs and the other processes even ones). This works great if your processing tasks have consistent durations, and it allows batching to work very efficiently too.
Another thing to consider is that the consumer can do "batching", which is especially useful, for example, in auditing. If your consumer has 10 events waiting, rather than writing 10 events to an audit log independently, you can collect all 10 and write them at the same time. In my experience this often removes the need to run multiple threads.
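The batching idea can be sketched against the Disruptor-style onEvent(event, sequence, endOfBatch) callback; the list-based audit sink below is a stand-in for a real log, not a Disruptor class:

```java
import java.util.ArrayList;
import java.util.List;

// Buffers events until the Disruptor signals endOfBatch, then flushes
// the whole batch with a single write instead of one write per event.
public class BatchingAuditHandler {
    private final List<String> pending = new ArrayList<>();
    private final List<List<String>> auditLog = new ArrayList<>(); // stand-in sink

    public void onEvent(String event, long sequence, boolean endOfBatch) {
        pending.add(event);
        if (endOfBatch) {
            // one write for the whole batch instead of one per event
            auditLog.add(new ArrayList<>(pending));
            pending.clear();
        }
    }

    public List<List<String>> writes() {
        return auditLog;
    }
}
```

With 10 events waiting, the handler sees endOfBatch == true only on the last one, so the sink receives a single write of 10 entries.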
Try to separate the slow part into another thread (I/O, calculations that are not O(1) or O(log n), etc.), or apply some kind of back pressure when the consumer is overloaded (by yielding or temporarily parking producers, replying with 503 or 429 status codes, etc.):
http://mechanical-sympathy.blogspot.com/2012/05/apply-back-pressure-when-overloaded.html
Use a set of identical event handlers. To avoid more than one event handler acting upon a single event, I use the following approach.
Create a thread pool of size equal to the number of cores in the system:
Executor executor = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors()); // a thread pool to which we can assign tasks
Then create a handler array, giving each handler its own id:
int n = Runtime.getRuntime().availableProcessors();
HttpEventHandler[] handlers = new HttpEventHandler[n];
for (int i = 0; i < n; i++) {
    handlers[i] = new HttpEventHandler(i); // each handler remembers its id
}
disruptor.handleEventsWith(handlers);
In the EventHandler, only the handler whose id matches the sequence modulo the handler count processes the event:
public void onEvent(HttpEvent event, long sequence, boolean endOfBatch) throws InterruptedException {
    if (sequence % Runtime.getRuntime().availableProcessors() == id) {
        System.out.println("----- onEvent triggered on thread " + Thread.currentThread().getName() + " at sequence " + sequence + " -----");
        // your event handler logic
    }
}

Setting two queues in TORQUE?

I have one queue called "batch" in a TORQUE setup. I want to create a new queue called "db" for debugging jobs. The "db" queue will have several restrictions, such as a maximum CPU time of 10 min. In principle both queues would use the same nodes.
I can create the new queue with the "qmgr" command; there is no problem with that. My question is: would there be any issue if both queues use the same nodes? I don't know if there could be interference between two processes coming from different queues. What I usually observe on supercomputers is that different queues use different nodes, but in our case we have only a small cluster and it doesn't make sense to split the nodes between queues.
Thanks.
Yes, that should be fine: if you don't specify which nodes belong to which queue, then all queues apply to all nodes.
qmgr
create queue db
set queue db resources_default.walltime=00:10:00
set queue db queue_type = Execution
set queue db enabled = True
set queue db started = True
create queue batch
set queue batch queue_type = Execution
set queue batch enabled = True
set queue batch started = True
There is no issue with more than one queue running jobs on the same nodes (this is the case for most setups). As a general rule, queues are meant to hold jobs, not nodes, and making it so that only one queue runs jobs on particular nodes requires some extra work (although it is certainly possible).
