Duplicate job name in the beanstalk queue - beanstalkd

Our system processes many jobs from the queue, and sometimes those jobs have not yet finished processing when new work arrives. There is a chance that our system will put jobs into the queue with the same name as jobs that are currently being processed.
Is there a check that will tell us whether a job with the same name is already in the queue before we add it?
Thanks guys!

Beanstalkd does not have a facility to look things up within it - it's a job queue, not a giant array. It can be used in conjunction with other tools that do allow random access to data, though, so you can record whether something has already been queued or done.
If you know all the jobs have a specific identifier, you could put them into Redis or Memcached, probably with some form of prefix, and possibly an expiry beyond which they won't be stored.
Redis also offers other data structures that could help, such as Bloom filters and the Redis-native HyperLogLog.
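A minimal sketch of that check in Go, assuming each job has a stable name and Redis is available via github.com/redis/go-redis/v9 (the "job:" key prefix and the one-hour TTL are arbitrary choices, nothing beanstalkd-specific):

import (
	"context"
	"time"

	"github.com/redis/go-redis/v9"
)

// enqueueOnce records the job name in Redis and only enqueues the job if the
// name was not already present. SetNX is atomic, so two producers racing on
// the same name cannot both enqueue.
func enqueueOnce(ctx context.Context, rdb *redis.Client, name string, enqueue func() error) error {
	ok, err := rdb.SetNX(ctx, "job:"+name, 1, time.Hour).Result()
	if err != nil {
		return err
	}
	if !ok {
		return nil // a job with this name is already queued or being processed
	}
	return enqueue() // put the job into beanstalkd as usual
}

The consumer would delete the key when the job finishes (or simply let the TTL expire), at which point the name becomes usable again.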

Related

Scheduling tasks/messages for later processing/delivery

I'm creating a new service, and for it I have database entries (Mongo) with a state field that I need to update based on the current time. For instance, if the start time was set to two hours from now, at that point I need to change the state from CREATED -> STARTED in the database, and there can be multiple such transitions.
Approaches I've thought of:
Keep querying for database entries whose time is <= the current time and change their states accordingly. This causes unnecessary reads (half the time they return nothing), and it will get complicated quickly as more states are added.
Write a job scheduler myself (I am using Go, so that would not be too hard) and schedule all the jobs in process, but I might lose the queued jobs if the process panics or crashes.
Use an existing product like Celery; I have found a Go implementation of it at https://github.com/gocelery/gocelery
Another task scheduler I've found is on Google Cloud (https://cloud.google.com/solutions/reliable-task-scheduling-compute-engine), but I don't want to get locked into proprietary technology.
I wanted to use some Pub/Sub service for this, but I couldn't find one that supports delayed messages (if that's a thing). My problem is mainly that I can't find an actual name for this problem, which makes it hard to search for; I've even tried searching the Microsoft docs. If someone can point me in the right direction, or tell me which (if any) of the approaches above I should use, that would be a great help!
UPDATE:
Found one more solution by Netflix, for the same problem
https://medium.com/netflix-techblog/distributed-delay-queues-based-on-dynomite-6b31eca37fbc
I think you are right: the problem you are trying to solve is the job (or task) scheduling problem.
One approach that many companies use is the system you are proposing: jobs are inserted into a datastore with a time to execute at, and the datastore is then polled for jobs that are due to run. There are optimizations that prevent extra reads, such as polling the database at a regular interval and backing off exponentially when nothing is found. The advantage of this design is that it tolerates node failure; the disadvantage is added complexity in the system.
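A rough sketch of such a poller in Go. Job, JobStore and DueJobs are hypothetical names standing in for your Mongo collection and query:

import (
	"context"
	"time"
)

type Job struct{ ID string }

// JobStore abstracts the datastore; DueJobs would translate to a Mongo query
// such as {state: "CREATED", startTime: {$lte: now}}.
type JobStore interface {
	DueJobs(ctx context.Context, now time.Time) ([]Job, error)
}

// pollLoop checks for due jobs at a base interval and backs off exponentially
// while there is nothing to do, which bounds the empty reads.
func pollLoop(ctx context.Context, store JobStore, run func(Job)) {
	const base, max = time.Second, time.Minute
	wait := base
	for {
		jobs, err := store.DueJobs(ctx, time.Now())
		if err == nil && len(jobs) > 0 {
			for _, j := range jobs {
				go run(j) // e.g. transition CREATED -> STARTED, then do the work
			}
			wait = base // something was due: poll again soon
		} else {
			wait *= 2 // empty read or error: back off
			if wait > max {
				wait = max
			}
		}
		select {
		case <-ctx.Done():
			return
		case <-time.After(wait):
		}
	}
}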
Looking around, in addition to the one you linked (https://github.com/gocelery/gocelery) there are other implementations of this model (https://github.com/ajvb/kala or https://github.com/rakanalh/scheduler were ones I found after a quick search).
The other approach you described, scheduling jobs in process, is very simple in Go because parked goroutines are extremely cheap, so spawning one goroutine per job costs almost nothing. The downside is that if the process dies, the jobs are lost.
go func() {
	// Park this goroutine until the scheduled time, then do the work.
	<-time.After(time.Until(expirationTime))
	// do work here.
}()
A final approach that I have seen, but wouldn't recommend, is the callback model (something like https://gitlab.com/andreynech/dsched). This is where your service calls out to another service (over HTTP, gRPC, etc.) and schedules a callback for a specific time. The advantage is that if you have multiple services in different languages, they can all use the same scheduler.
Overall, before you decide on a solution, I would consider some trade-offs:
How acceptable is job loss? If it's ok that some jobs are lost a small percentage of the time, maybe an in-process solution is acceptable.
How long will jobs be waiting? If it's longer than the shutdown period of your host, maybe a datastore based solution is better.
Will you need to distribute job load across multiple machines? If you need to distribute the load, sharding and scheduling are tricky things and you might want to consider using a more off-the-shelf solution.
Good luck! Hope that helps.

Turn recovery on after first message

I have a persistent actor which receives many messages. The first message is CREATE (a case class) and the following messages are UPDATEs (case classes). So when it receives CREATE, it should not need to go to the persistence store to run recovery, because the storage is empty for this actor; from my perspective that is wasted work.
Is there any way to skip recovery for a particular input message (the first one, which is CREATE)?
A persistent actor will always have to hit the database, because there is no other way to know whether it existed before: it could have been created in a previous instance of the application that has since stopped, or it could have been created on a different node in a cluster.
In general, a good pattern for performance is to keep the actor in memory after it has been hit the first time, as that allows the fastest possible responses. The most common way to do this is Cluster Sharding (which you can read more about in the docs here: https://doc.akka.io/docs/akka/current/cluster-sharding.html?language=scala#cluster-sharding).
I have never heard of anyone finding the recovery hit for an empty persistent actor to be a performance problem, and I'm not sure it is possible to solve in a general way. If you do have such a problem, and can somehow know that the actor was never created before, you cannot express that with Akka Persistence; you would have to build a special solution for it yourself.

How to avoid the same queue job being processed more than once when scaled across multiple dynos on Heroku

We have a Node.js application running LoopBack whose main purpose is to process orders received from the client. Currently the entire order process is handled during the single HTTP request that places the order, including the payment, insertion into the database, sending confirmation emails, etc.
We are finding that this method, whilst working at the moment, lacks scalability - the application will potentially need to process thousands of orders per minute as it grows. In addition, our order process currently writes data to our own database, but we are now looking at third-party integrations (till systems) over whose speed and availability we have no control.
We also currently have a potential race condition: we have to assign a 'short code' to each order for easy reference by the client, and these need to rotate, so if the starting number is 1 and the maximum is 100, the 101st order must be assigned the number 1. At the moment we look at the previous order and either increment its reference by 1 or wrap back to the start. This is fine now because of the low traffic, but as we scale it could result in multiple orders being assigned the same reference number.
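For what it's worth, one way to make that rotation race-free is to derive the short code from a single atomic counter rather than from the previous order, for example with Redis INCR (the key name and the go-redis client are illustrative assumptions, not part of the stack described above):

import (
	"context"

	"github.com/redis/go-redis/v9"
)

// nextShortCode atomically increments a shared counter, so concurrent dynos
// can never be handed the same value, then maps it onto the rotating 1..100 range.
func nextShortCode(ctx context.Context, rdb *redis.Client) (int64, error) {
	n, err := rdb.Incr(ctx, "order:shortcode").Result()
	if err != nil {
		return 0, err
	}
	return (n-1)%100 + 1, nil
}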
Therefore, we want to implement a queue to manage all of this. Our app is deployed on Heroku, where we already use a worker process for some of the monthly number crunching the app requires. Having read some of the Heroku articles on implementing a queue (https://devcenter.heroku.com/articles/asynchronous-web-worker-model-using-rabbitmq-in-node, https://devcenter.heroku.com/articles/background-jobs-queueing), it is still not clear how, across multiple worker dynos, we would control the order in which queued items are processed and ensure that the same job is not processed by more than one dyno. The order of processing is not so important, but avoiding repetition is: if the same order is processed more than once we run the risk of the race condition above.
So essentially my question is this; how do we avoid the same queue job being processed more than once when scaled across multiple dynos on Heroku?
What you need is already provided by RabbitMQ, the message broker used by Heroku's CloudAMQP add-on.
You don't need to worry about a race condition between multiple workers. A job placed onto the queue is stored until a consumer retrieves it, and while one worker holds a delivered job, no other worker can consume it; once the worker acknowledges it, it is gone for good (if the worker dies without acknowledging, the job is re-queued rather than lost).
RabbitMQ manages all of these aspects of the message-queuing paradigm for you; a minimal worker sketch follows the links below.
A couple of links useful for your project:
What is RabbitMQ?
Getting started with RabbitMQ and Node.js
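For illustration, here is a minimal Go worker along those lines, using github.com/rabbitmq/amqp091-go with manual acknowledgements and a prefetch of 1 (the queue name, URL, and processing step are placeholders):

package main

import (
	"log"

	amqp "github.com/rabbitmq/amqp091-go"
)

func main() {
	// The URL is a placeholder; on Heroku it would come from the CLOUDAMQP_URL config var.
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ch, err := conn.Channel()
	if err != nil {
		log.Fatal(err)
	}
	defer ch.Close()

	// Durable queue named "orders" (name is illustrative).
	q, err := ch.QueueDeclare("orders", true, false, false, false, nil)
	if err != nil {
		log.Fatal(err)
	}

	// Prefetch 1: each dyno holds at most one unacknowledged message at a time.
	if err := ch.Qos(1, 0, false); err != nil {
		log.Fatal(err)
	}

	// autoAck=false: the message stays owned by this worker until Ack is called.
	msgs, err := ch.Consume(q.Name, "", false, false, false, false, nil)
	if err != nil {
		log.Fatal(err)
	}

	for m := range msgs {
		log.Printf("processing order %s", m.Body) // do the real work here
		m.Ack(false)                              // ack only after the work is done
	}
}

Note that with manual acknowledgements delivery is at-least-once: a message held by a dyno that crashes is re-queued and handed to another worker, so if your processing is not idempotent you still want a guard against that re-delivery.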

Solution for composite events with Apache Kafka?

Architecture question: we have an Apache Kafka based eventing system, with multiple systems producing/sending events. Each event carries some data, including an ID, and I need to implement an "ID is complete" event. Example:
Event_A(id)
Event_B(id)
Event_C(id)
are received asynchronously, and only once all 3 events have been received do I need to send an Event_Complete(id). The problem is that we have multiple clusters of consumers and our database is eventually consistent.
A simple way would be to use the eventually consistent DB to store which events we have seen for each ID, and to add a "cron" job that eventually catches any race conditions (a sketch of this bookkeeping follows below).
It feels like a problem that has probably been solved out there already. So my question is: is there a better way to do it (without introducing a strongly consistent datastore into the picture)?
Thanks a bunch!
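For what it's worth, the bookkeeping in the "simple way" above can be quite small. A sketch using a Redis set per ID (go-redis v9; the key name and the hard-coded count of 3 are assumptions taken from the example), with the cron-style sweep still catching anything that slips through:

import (
	"context"

	"github.com/redis/go-redis/v9"
)

// markEvent records that one of the expected events arrived for id and
// reports whether the set is now complete.
func markEvent(ctx context.Context, rdb *redis.Client, id, event string) (bool, error) {
	key := "events:" + id
	if err := rdb.SAdd(ctx, key, event).Err(); err != nil {
		return false, err
	}
	n, err := rdb.SCard(ctx, key).Result()
	if err != nil {
		return false, err
	}
	return n == 3, nil // true once Event_A, Event_B and Event_C have all been seen
}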

Usecases for mapred.job.queue.name

What are the real-world use cases for Map/Reduce job queues, i.e. for the value of the mapred.job.queue.name property? I always see default being used as the value.
mapred.job.queue.name is what you use to assign a job to a particular queue. By default, all jobs go to the "default" queue. However, it is possible to create hierarchical queues, like root, root.q1, root.q1.q1a and so on.
Each of these queues can have its own set of attributes to ensure certain priority.
A real-world scenario is when you have multiple stakeholders asking for reports from the same infrastructure. For example, at my workplace we have data scientists running various kinds of research jobs, a customer support team looking for various daily and weekly figures, and then the real jobs that support the day-to-day business. At the heart of it, the infrastructure should support all of them as best it can.
Having various queues with different priorities makes it easy for Hadoop to decide what to run next when a processor becomes available, and how much capacity each workload may use.
So the data scientists would submit to a "Data Analyst" queue and the marketing team to a "Marketing" queue. It is also possible to change a queue's priority depending on the time of day.
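For example, assuming the scheduler has a queue named marketing configured and the job's driver goes through Hadoop's ToolRunner/GenericOptionsParser (the jar and class names here are made up), the queue can be chosen at submission time:

hadoop jar report-job.jar com.example.Report -Dmapred.job.queue.name=marketing <input> <output>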
The Map/Reduce system always supports at least one queue, with the name default. Hence, this parameter's value should always contain the string default. Some job schedulers supported in Hadoop, like the Capacity Scheduler, support multiple queues. If such a scheduler is being used, the list of configured queue names must be specified here. Once queues are defined, users can submit jobs to a queue using the property mapred.job.queue.name in the job configuration. There could be a separate configuration file for the properties of these queues, managed by the scheduler; refer to the scheduler's documentation for details.
Ref: http://hadoop.apache.org/docs/r0.19.1/cluster_setup.html
