Deleting a periodic job in Sidekiq - ruby

I am trying to delete a Sidekiq Enterprise periodic job from an app, and I'm not sure how to delete the periodic job itself after removing the schedule from the initializer and deleting the worker class.
I see this answer from earlier, but the app in question has other jobs (both periodic and regular Sidekiq jobs), so I cannot just globally blow away all scheduled periodic jobs, and I'd prefer not to totally shut down and restart Sidekiq either. Is there a way to remove just the specific job from Redis so that it no longer runs at its next scheduled time?

You have to deploy your code change and restart Sidekiq for it to pick up changes to periodic jobs.
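For context, a Sidekiq Enterprise periodic job is registered in an initializer with a block like the sketch below (the cron expression and worker name are placeholders, not from the question). Deleting the job means removing its register call, deploying, and restarting Sidekiq:

```ruby
# config/initializers/sidekiq.rb
Sidekiq.configure_server do |config|
  config.periodic do |mgr|
    # cron expression, then the worker class name; removing this line
    # (along with the worker class) deletes the periodic job on restart
    mgr.register("0 * * * *", "HourlyReportWorker")
  end
end
```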

Related

Laravel 7 - Stop Processing Jobs and Clear Queue

I have a production system on AWS and use Laravel Forge. There is a single default queue that is processing Jobs.
I've created a number of jobs and now wish to delete them (they take many hours to complete, and I've realized my input data was bad). I created a new job with good data, but it won't be processed until all the others have finished.
How can I delete all jobs?
The queue was previously set up using the redis queue driver. I could not figure out how to delete jobs, so I switched the driver to database and restarted the server, thinking that this would at least stop the jobs from processing. However, much to my dismay, they continue to be processed :-(
I even deleted the worker from the Forge UI and restarted the server: the jobs still process.
Why do jobs continue to be processed?
How can I stop them?
You can use:
php artisan queue:clear redis
This clears all jobs from the default queue on the redis connection. If you put jobs in another queue, specify the queue name, for example:
php artisan queue:clear redis --queue=custom_queue_name

Ensuring that laravel scheduled cron jobs have run in production

I have a cron job that runs Laravel's task scheduler on a Laravel Forge production server. This cron job runs every minute to ensure that any jobs scheduled in the scheduler get executed. The task scheduler has multiple jobs that all execute at different times.
Essentially what I want is a way to monitor the cron job in production so that I'm notified via an alert (e.g. Slack, SMS) if it fails or isn't running. Without an alert, I wouldn't know the cron job had stopped without checking manually. What's the best way of achieving this? Thanks.
If I understand you correctly, you can take advantage of pinging URLs to ping a specific URL after your cron job runs. Something like the following:
$schedule->command('emails:send')
->daily()
->thenPing($url);
Now you can either build your own endpoint ($url) to verify that it has been hit daily, or use any of the many free/paid cron monitoring services.

How to deploy laravel into a docker container while there are jobs running

We are trying to migrate our Laravel setup to use Docker. Dockerizing the Laravel app was straightforward, but we ran into an issue: if we do a deployment while scheduled jobs are running, they get killed when the container is destroyed. What's the best practice here? Having a separate container to run the Laravel scheduler doesn't seem like it would solve the problem.
Run the scheduled job in a different container so you can scale it independently of the laravel app.
Run multiple containers of the scheduled job so you can stop some to upgrade them while the old ones will continue processing jobs.
Docker sends a SIGTERM signal to the container and waits for it to exit cleanly before issuing SIGKILL (the time between the two signals is configurable, 10 seconds by default). This allows your current job to finish cleanly (or to save a checkpoint to continue from later).
The plan is to stop old containers and start new containers gradually so that no jobs are lost and there is no downtime. If you use an orchestrator like Docker Swarm or Kubernetes, it will handle most of these logistics for you.
Note: the Laravel scheduler is based on cron and fires processes that will be killed by Docker. To prevent this, have the scheduler push a job onto a Laravel queue. The queue worker is a foreground process, so the SIGTERM it receives before being killed gives it the chance to stop or save state cleanly.
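The SIGTERM flow described above can be sketched as a minimal worker loop (shown in Ruby for brevity; a Laravel queue worker follows the same pattern): trap the signal, finish the job in flight, then exit before Docker escalates to SIGKILL. The job-processing step is a hypothetical placeholder.

```ruby
# Hedged sketch: a worker loop that exits cleanly on SIGTERM, so Docker's
# default 10-second grace period (SIGTERM, then SIGKILL) is enough to
# finish the job in flight.
def run_worker
  running = true
  # the trap handler only flips the flag; the loop finishes its current
  # iteration (the in-flight job) before exiting
  Signal.trap('TERM') { running = false }
  while running
    # process_next_job   # hypothetical: pop and run one queued job
    sleep 0.1
  end
  :clean_exit            # checkpoint/cleanup would go here
end
```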

How to move a beanstalkd job in the ready queue to the front if the queue is very big and is taking a while to process?

I had a quick question in regards to a beanstalkd queue.
So say I have 500,000 ready jobs in the beanstalk queue, all waiting to be processed, while more jobs are being added to the queue at the same time. All of these jobs have the same priority.
Is it possible to move a job in the ready queue so that it can be processed before all of the other jobs in that queue?
I've just started using beanstalk and I was wondering whether this is possible to do in beanstalk?
I'm on a linux environment.
I guess I could delete that specific job and reinsert it with a priority that allows it to be processed first, but I would like to avoid doing that if there is a command that lets me do it directly.
Please let me know if more information is required and I thank you in advance for your help. :)
There is no such command currently in Beanstalkd.
This is what different priorities are used for.
Beanstalk has a new command as of version 1.12: reserve-job.
A job can be reserved by its id. Once a job is reserved for a client, the client has a limited time to run it (the TTR) before the job times out; when it times out, the server puts the job back into the ready queue. The command looks like this:
reserve-job <id>\r\n
<id> is the job id to reserve
This should immediately return one of these responses:
NOT_FOUND\r\n if the job does not exist, is already reserved by another client, or is not in the ready, buried, or delayed state.
RESERVED <id> <bytes>\r\n<data>\r\n. See the description for the reserve command.
Assuming you know the ID of the job that needs the priority adjustment, you can connect to beanstalk, reserve that specific job, then release it with a new priority:
release <id> <pri> <delay>
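Assuming beanstalkd >= 1.12, the reserve-then-release dance can be sketched over a raw socket like this; the host, port, and job id are assumptions, and pri=0 is the most urgent priority value:

```ruby
require 'socket'

# Hedged sketch: jump one job to the front of the ready queue by reserving
# it by id (reserve-job, beanstalkd >= 1.12) and releasing it with
# priority 0, the most urgent value.
def prioritize_job(id, host: 'localhost', port: 11300)
  sock = TCPSocket.new(host, port)
  sock.write("reserve-job #{id}\r\n")
  status = sock.gets("\r\n")           # "RESERVED <id> <bytes>\r\n" or "NOT_FOUND\r\n"
  return status if status.nil? || status.start_with?('NOT_FOUND')
  bytes = status.split[2].to_i
  sock.read(bytes + 2)                 # consume the job body plus trailing \r\n
  sock.write("release #{id} 0 0\r\n")  # pri=0 (highest), delay=0
  sock.gets("\r\n")                    # expect "RELEASED\r\n"
ensure
  sock&.close
end
```

Note that release keeps the original job id, whereas deleting and re-putting the job (the workaround mentioned in the question) assigns a new one.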

Event Scheduler in PostgreSQL?

Is there an event scheduler available in PostgreSQL similar to the one in MySQL?
While a lot of people just use cron, the closest thing to a built-in scheduler is pgAgent. It's a component of the pgAdmin GUI management tool. A good intro to it can be found in Setting up PgAgent and doing scheduled backups.
pg_cron is a simple, cron-based job scheduler for PostgreSQL that runs inside the database as an extension. A background worker initiates commands according to their schedule by connecting to the local database as the user that scheduled the job.
pg_cron can run multiple jobs in parallel, but it runs at most one instance of a job at a time. If a second run is supposed to start before the first one finishes, then the second run is queued and started as soon as the first run completes. This ensures that jobs run exactly as many times as scheduled and don't run concurrently with themselves.
If you set up pg_cron on a hot standby, then it will start running the cron jobs, which are stored in a table and thus replicated to the hot standby, as soon as the server is promoted. This means your periodic jobs automatically fail over with your PostgreSQL server.
Source: citusdata.com
