I'm creating queues dynamically, i.e. 1, 2, 3, 4, 5. These queues are created based on users' requests; each request creates a new queue.
Right now all of these queues run one after another. I would like to run them in parallel, so that each user can see their own jobs running rather than waiting for other users' tasks to complete.
I resolved the issue by creating multiple Hangfire servers, each listening to its own set of queues.
For example, a couple of queues on one server and a few on another (the queue names below are just examples):
code:
var options1 = new BackgroundJobServerOptions { Queues = new[] { "queue1", "queue2" } };
var options2 = new BackgroundJobServerOptions { Queues = new[] { "queue3" } };
app.UseHangfireServer(options1);
app.UseHangfireServer(options2);
I am running Laravel in Docker, and I have 3 workers with different names: default, high, and important. I want to know whether those queues are active and can fire a job.
In particular, I want to know whether my important queue is running. It seems that when onQueue('important') is used in app/Console/Kernel.php, the job does not run.
I have a JMeter script that is executed in distributed mode with 4 nodes. One of them is the controller and does not make any requests; the other 3 act as workers and make the requests.
I can currently set one of the workers as a master worker by setting a property in the user.properties file for that specific worker. This "master" worker performs some requests that have to be done only once, so these requests can't be made by the other workers.
Now I need to extract some values from the responses of these unique requests and send this information to the other slaves.
Is it possible to do this?
How can data be sent from one worker to the other workers at run time?
You can use the HTTP Simple Table Server plugin and populate it with data from the "master" worker using the ADD command. Once you have set up the prerequisites this way, all other workers, including the master, can access the generated data via the READ command.
The HTTP Simple Table Server can be installed using the JMeter Plugins Manager.
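To make the pattern concrete, here is a minimal stand-in sketched in Python: a tiny HTTP "table server" that one worker populates and the others read from. The endpoints, port, and parameter names here are invented for illustration only; the real Simple Table Server plugin has its own API.

```python
# Minimal sketch of the "external table server" pattern for sharing
# run-time data between otherwise isolated workers. JMeter would use
# the HTTP Simple Table Server plugin; this stand-in server and its
# /add and /read endpoints are illustrative, not the real STS API.
import threading
import urllib.parse
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

TABLE = {}  # shared key/value store held by the server

class TableHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        url = urllib.parse.urlparse(self.path)
        params = dict(urllib.parse.parse_qsl(url.query))
        if url.path == "/add":        # "master" worker stores a value
            TABLE[params["key"]] = params["value"]
            body = b"OK"
        elif url.path == "/read":     # other workers read it back
            body = TABLE.get(params["key"], "").encode()
        else:
            body = b""
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):     # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), TableHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_port}"

# "Master" worker publishes a value extracted from its unique request:
urllib.request.urlopen(f"{base}/add?key=token&value=abc123")
# Any other worker can now read it at run time:
token = urllib.request.urlopen(f"{base}/read?key=token").read().decode()
print(token)
server.shutdown()
```

The key point is that the shared state lives outside any single worker, so every node in the distributed test can reach it over HTTP.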
No, it's not possible out of the box.
The communication between the controller and the servers is very limited:
the controller sends start / stop / shutdown commands to the servers;
the servers send sample results back to the controller.
That's it.
To communicate between workers you'll need a third-party tier such as a Redis database or similar.
I have the following problem to solve:
Multiple users can submit jobs to a queue via a web interface.
These jobs are then stored in the database via the database queue driver.
Now my problem is: I want the queue to run all jobs until, inside a job, I can say something like $queue->pause(), because the next job needs confirmation from the user before it may run.
How would I do something like this?
run jobs
inside one job, the job determines that it needs confirmation from the user
halt the queue, keeping the job that needs the confirmation in the queue
any user on the website can press a button, which deletes this confirmation job and starts the queue again
My current "solution", which didn't work, was this:
create 2 different job types:
ImageProcessingJob
UserNotificationJob
The queue worked through all ImageProcessingJobs until it hit a UserNotificationJob.
Inside UserNotificationJob->handle() I called Artisan::call("queue:restart"); which stopped the queue.
The problem with this solution: the UserNotificationJob also got deleted, so if I started the queue again it would immediately continue with the remaining ImageProcessingJobs without waiting for the actual confirmation.
I'm also open to other architectural solutions without a Queue system.
One approach which avoids pausing the queue is to have the UserNotificationJob wait on a SyncEvent (the SyncEvent is set when the confirmation comes back from the user). You can have this wait time out if you like, but then you need to repost the job to the queue. If you decide to time out and repost, you can use job chaining to set up dependencies between jobs so that nothing can run until the UserNotificationJob completes.
Another approach might be to simply avoid posting the remaining jobs until the confirmation is sent from the user.
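The wait-on-a-sync-event idea can be sketched in Python with an in-process threading.Event. This is an analogy only: real Laravel queue workers are separate processes, so you would use a database flag or cache lock instead of an in-memory event, and the function and timeout names here are invented for illustration.

```python
# Sketch of the "notification job waits on a sync event" pattern.
# A real queue system would persist the confirmation flag (DB/cache);
# here a threading.Event stands in for it.
import threading

confirmation = threading.Event()   # set when the user confirms

def user_notification_job(timeout=5.0):
    """Block until the user confirms, or time out so the job can be
    reposted to the queue instead of being lost."""
    if confirmation.wait(timeout):
        return "confirmed"
    return "timed out; repost job"

# Simulate the user pressing the confirmation button after 0.1 s:
threading.Timer(0.1, confirmation.set).start()
result = user_notification_job()
print(result)
```

The timeout branch is what prevents the failure mode described above: instead of the confirmation job being consumed and deleted, it reposts itself until the user actually confirms.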
I haven't yet actually used Resque. I have the following questions and assumptions that I'd like verified:
1) I understand that you can have a multiserver architecture by configuring each of your resque instances to point to a central redis server. Correct?
2) Does this mean that any resque instance can add items to a queue and any workers can work on any of those queues?
3) Can multiple workers respond to the same item in a queue? I.e. if one server puts "item 2 has been updated" in a queue, can workers 1, 2, and 3, on different servers, all act on it? Or would I need to create separate queues? I want something like pub/sub-style tasks.
4) Does the Sinatra monitor app live on each instance of Resque? Or is there just one app that knows about all the queues and workers?
5) Does Resque know when a task is completed? I.e. does the monitor app show that a task is in progress? Or just that a worker took it?
6) Does the monitor app show completed tasks? I.e. if a task complete quickly will I be able to see that at some point in the recent past that task was completed?
7) Can I programmatically query whether a task has been started, is in progress, or is completed?
As I have been using Resque extensively in our project, here are answers to your questions:
I understand that you can have a multi-server architecture by configuring each of your resque instances to point to a central redis server. Correct?
Ans: Yes, you can have multiple servers pointing to a single central Redis server. I am running a similar architecture.
Does this mean that any resque instance can add items to a queue and any workers can work on any of those queues?
Ans: This depends on how you configure your servers: you have to create queues and then assign workers to them.
You can have multiple queues, and each queue can have multiple workers working on it.
Can multiple workers respond to the same item in a queue? I.e. one server puts "item 2 has been updated" in a queue, can workers 1, 2, and 3, on different servers, all act on that? Or would I need to create separate queues? I kind of want a pub/sub types tasks.
Ans: This again depends on your requirements: having a single queue with all workers working on it is valid, as is a separate queue for each server.
Any server can put jobs in any queue, but only the assigned workers will pick up and work on a job, and each queued item is picked up by exactly one worker.
So for pub/sub-style fan-out, where several workers each react to the same event, you would enqueue one job per consumer (or use one queue per consumer).
Does the Sinatra monitor app live on each instance of Resque? Or is there just one app that knows about all the queues and workers?
Ans: The Sinatra monitor app gives you an interface where you can see all worker- and queue-related info such as running jobs, waiting jobs, queues, failed jobs, etc. Since all of that state lives in the central Redis server, a single instance of the app can see every queue and worker.
Does Resque know when a task is completed? I.e. does the monitor app show that a task is in process? Or just that a worker took it?
Ans: It does; Resque maintains internal state in Redis to track jobs, and the monitor shows which workers are currently processing which jobs.
Does the monitor app show completed tasks? I.e. if a task completes quickly, will I be able to see at some point in the recent past that the task was completed?
Ans: Yes, it shows stats about processed jobs.
Can I programmatically query whether a task has been started, is in progress, or is completed?
Ans: Yes, you can. For example, to know which workers are currently working, use Resque.working; similarly, you can check the Resque code base and use anything it exposes.
Resque is a very powerful library; we have been using it for more than a year now.
Cheers, happy coding!
I have recently started working on distributed computing for increasing the computation speed. I opted for Celery. However, I am not very familiar with some terms. So, I have several related questions.
From the Celery docs:
What's a Task Queue?
...
Celery communicates via messages, usually using a broker to mediate between clients and workers. To initiate a task the client adds a message to the queue, the broker then delivers that message to a worker.
What are clients (here)? What is a broker? Why are messages delivered through a broker? Why would Celery use a backend and queues for interprocess communication?
When I execute the Celery console by issuing the command
celery worker -A tasks --loglevel=info --concurrency 5
Does this mean that the Celery console is a worker process in charge of 5 different processes, and that it keeps track of the task queue? When a new task is pushed into the task queue, does this worker assign the task/job to one of the 5 processes?
Last question first:
celery worker -A tasks --loglevel=info --concurrency 5
You are correct - the worker controls 5 processes. The worker distributes tasks among the 5 processes.
A "client" is any code that runs celery tasks asynchronously.
There are 2 different types of communication: when you call apply_async you send a task request to a broker (most commonly RabbitMQ), which is basically a set of message queues.
When the workers finish they put their results into the result backend.
The broker and results backend are quite separate and require different kinds of software to function optimally.
You can use RabbitMQ for both, but once you reach a certain rate of messages it will not work properly. The most common combination is RabbitMQ for broker and Redis for results.
We can use the analogy of an assembly line in a factory to understand how Celery works.
Each product is placed on a conveyor belt.
The products are processed by machines.
At the end, all the processed products are stored in one place, one by one.
How Celery works:
Note: instead of each product being processed as soon as it is placed on the conveyor belt, in Celery a queue is maintained whose output is fed to a worker for execution, one task at a time (sometimes more than one queue is maintained).
Each request (which is a task) is sent to a queue (Redis/RabbitMQ) and an acknowledgment is sent back.
Each task is assigned to a specific worker, which executes it.
Once the worker has finished the task, its output is stored in the result backend (Redis).
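The steps above can be sketched with the standard library: a client puts task messages on a "broker" queue, a worker picks them up one at a time, and results land in a "result backend". Real Celery uses Redis/RabbitMQ and separate processes; the queue, dict, and task names here are illustrative stand-ins.

```python
# Sketch of the client -> broker -> worker -> result backend flow.
import queue
import threading

broker = queue.Queue()   # stands in for Redis/RabbitMQ
result_backend = {}      # stands in for the Redis result store

def worker():
    while True:
        task_id, func, args = broker.get()     # broker delivers a message
        result_backend[task_id] = func(*args)  # store the result
        broker.task_done()

threading.Thread(target=worker, daemon=True).start()

# Client side: "apply_async" amounts to putting a message on the queue.
broker.put(("t1", lambda x, y: x + y, (2, 3)))
broker.put(("t2", str.upper, ("celery",)))
broker.join()            # wait until the worker has drained the queue

print(result_backend)
```

Note that the client never talks to the worker directly: it only posts messages to the broker and later reads results from the backend, which is why Celery needs both pieces.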