Is it possible to somehow view Sidekiq's list of completed jobs - for example, find all PurchaseWorkers with params (1)? Yesterday in my app a delayed method that was supposed to run didn't, and the associated entity (let's say 'purchase') got stuck in limbo with state "processing". I am trying to understand the reason: was the job never enqueued at all, or was it enqueued but exited unexpectedly for some reason? There were no errors in the Sidekiq log.
Thanks.
This is old, but I wanted to see the same thing since I'm not sure whether the jobs I scheduled ran or not!
It turns out Sidekiq doesn't have anything built in to see jobs that have completed, and it still doesn't seem to.
If a job erred and never completed, it should be in the 'Dead' queue (once its retries are exhausted). But checking that something actually ran seems to be beyond Sidekiq by default.
The FAQ suggests installing third-party plugins to track and log information: https://github.com/mperham/sidekiq/wiki/FAQ#how-can-i-tell-when-a-job-has-finished One of them provides a callback for follow-up work (maybe adding a record for completed jobs elsewhere?).
You can also set up Sidekiq to log somewhere other than STDOUT (the default) so you can output log information about your jobs - in this case, logging that a job completed, or catching errors if for some reason it never lands in the retry or dead queues when there is a problem. See https://github.com/mperham/sidekiq/wiki/Logging
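For example, a minimal sketch (the class name JobAuditLogger and the log path are my own, assuming a reasonably recent Sidekiq) of a server middleware that writes a line whenever a job finishes or raises, plus redirecting the logger to a file:

require 'sidekiq'
require 'logger'

# Logs each job's outcome; the job hash carries 'class' and 'args'.
class JobAuditLogger
  def call(worker, job, queue)
    yield
    Sidekiq.logger.info("done: #{job['class']} #{job['args'].inspect} (queue: #{queue})")
  rescue => e
    Sidekiq.logger.error("failed: #{job['class']} #{job['args'].inspect} - #{e.class}: #{e.message}")
    raise
  end
end

Sidekiq.configure_server do |config|
  Sidekiq.logger = Logger.new('log/sidekiq.log') # file instead of STDOUT
  config.server_middleware do |chain|
    chain.add JobAuditLogger
  end
end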
To see jobs still in a queue, you can use the Rails console and look at the queue by name: https://www.rubydoc.info/gems/sidekiq/Sidekiq/Queue
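For instance, something like this in a Rails console (using the PurchaseWorker example from the question) shows what is still waiting, scheduled, retrying or dead - though not what has already completed:

require 'sidekiq/api'

queue = Sidekiq::Queue.new('default')
queue.size
queue.each { |job| puts "#{job.klass} #{job.args.inspect} enqueued at #{job.enqueued_at}" }

# The scheduled, retry and dead sets can be searched the same way:
Sidekiq::ScheduledSet.new.select { |job| job.klass == 'PurchaseWorker' && job.args == [1] }
Sidekiq::RetrySet.new.size
Sidekiq::DeadSet.new.size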
One option is the default stats provided by sidekiq - https://github.com/mperham/sidekiq/wiki/Monitoring#using-the-built-in-dashboard
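Those stats are also available from code; note they are aggregate counters, not a per-job history:

require 'sidekiq/api'

stats = Sidekiq::Stats.new
stats.processed  # total number of jobs processed (a counter only)
stats.failed
stats.enqueued
stats.queues     # e.g. { "default" => 12 }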
The best option is to use the Web UI, described here - https://github.com/mperham/sidekiq/wiki/Monitoring#web-ui
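In a Rails app, mounting it looks roughly like this (a sketch assuming a standard Rails routes file; add authentication before exposing it publicly):

# config/routes.rb
require 'sidekiq/web'

Rails.application.routes.draw do
  mount Sidekiq::Web => '/sidekiq'
end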
I set up a Mesos cluster which runs the Apache Aurora framework, and I registered 100 cron jobs which run every minute on a pool of 5 slave machines. I found that after being scheduled 100 times, the cron jobs got stuck in the "PENDING" state. May I ask what kind of logs I can inspect and what the possible problem is?
It could be a couple of things:
Do you still have sufficient resources in your cluster?
Are those resources offered to Aurora? Or maybe only to another framework?
Do you have any task constraints that prevent your tasks from being scheduled?
Possible information sources:
What does the tooltip or the expanded status say on the UI? (as shown in the screenshot)
The Aurora scheduler has log files. However, normally those are not needed for an end user to figure out why tasks are stuck in PENDING.
In case you are stuck here, it would probably be best to drop by the #aurora IRC channel on Freenode.
My app runs on Heroku with unicorn and uses sucker_punch to send a small quantity of emails in the background without slowing the web UI. This has been working pretty well for a few weeks.
I changed the unicorn config to the Heroku recommended config. The recommended config includes an option for the number of unicorn processes, and I upped the number of processes from 2 to 3.
Apparently that was too much. The sucker_punch jobs stopped running. I have log messages that indicate when they are queued and I have messages that indicate when they start processing. The log shows them being queued but the processing never starts.
My theory is that I exceeded memory by going from 2 to 3 unicorns.
I did not find a message anywhere indicating a problem.
Q1: Should I expect to find a failure message somewhere? Something like "attempting to start sucker_punch -- oops, not enough memory"?
Q2: Any suggestions on how I can be notified of a failure like this in the future?
Thanks.
If you are indeed exceeding dyno memory, you should find R14 or R15 errors in your logs. See https://devcenter.heroku.com/articles/error-codes#r14-memory-quota-exceeded
A more likely problem, though, given that you haven't found these errors, is that something within the perform method of your sucker punch worker is throwing an exception. I've found sucker punch tasks to be a pain to debug because it appears the lib swallows all exceptions silently. Try instantiating your task and calling perform on it from a rails console to make sure that it behaves as you expect.
For example, you should be able to do this without causing an exception:
task = YourTask.new
task.perform :something, 55
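To make failures visible instead of silently swallowed, one option (a sketch - YourTask and the work inside perform are placeholders) is to rescue and log inside perform; newer sucker_punch versions (2.x) also expose a global exception handler:

class YourTask
  include SuckerPunch::Job

  def perform(kind, id)
    # ... the real work ...
  rescue => e
    Rails.logger.error("YourTask failed: #{e.class}: #{e.message}")
    raise
  end
end

# sucker_punch 2.x only (check your version):
# SuckerPunch.exception_handler = ->(ex, klass, args) do
#   Rails.logger.error("#{klass} #{args.inspect} raised #{ex.message}")
# end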
I am running delayed_job for a few background services, all of which, until recently, ran in isolation, e.g. send an email, write a report, etc.
I now have a need for one delayed_job, as its last step, to lodge another delayed_job.
delay.deploy() - when delayed_job runs this, it triggers a deploy action, the last step of which is to ...
delay.update_status() - when delayed_job runs this job, it will check the status of the deploy we started. If the deploy is still progressing, we call delay.update_status() again; if the deploy has stopped, we write the final deploy status to a db record.
Step 1 works fine - after 5 seconds, delayed_job fires up the deploy, which starts the deployment, and then calls delay.update_status().
But here, instead of update_status() starting up in 5 seconds, delayed_job goes into a busy loop, firing off a bunch of update_status calls and looping really hard without pause.
I can see the logs filling up with all these calls, the server slows down, until the end-condition for update_status is reached (deploy has eventually succeeded or failed), and things get quiet again.
Am I using delayed_job's delay() incorrectly, or am I missing a basic tenet of this use case?
OK, it turns out this is "expected behaviour" - if you are already in the code running for a delayed_job and you call .delay() again without specifying a delay, it will run immediately. You need to add the run_at parameter:
delay(queue: :deploy, run_at: 10.seconds.from_now).check_status
See the discussion in Google Groups.
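For reference, the resulting pattern looks something like this (a sketch - the model, the status checks and the 10-second interval are illustrative):

class Deployment < ActiveRecord::Base
  def update_status
    if deploy_still_running?   # hypothetical check against the deploy
      # Re-enqueue ourselves with run_at; without it this nested delay()
      # call runs immediately and produces the busy loop described above.
      delay(queue: :deploy, run_at: 10.seconds.from_now).update_status
    else
      update!(status: final_deploy_status)   # hypothetical final-status lookup
    end
  end
end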
I created some workers with coffee-resque and was trying to view them using the Ruby version of resque-web, but I only saw intermittent workers flash in and out.
I noticed that coffee-resque untracks workers while they are paused. Is that the intended behavior? It made resque-web list only those flashing, intermittent workers, and they always had a status of "waiting" when they did appear, even though that was when they were actually processing.
Am I doing it wrong or is there a suggested way of monitoring the worker queues?
Also, is there a way to clean up the inactive orphaned worker keys in redis if the worker process failed and didn't do a graceful untrack on exit?
I recently submitted a pull request that fixed this issue. It has been accepted into coffee-resque and a new version has been released.
https://github.com/technoweenie/coffee-resque/issues/17
This fix was released as 0.1.6.
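For the second part of the question (orphaned worker keys left in Redis), a rough sketch from a Ruby console using the resque gem's API - only unregister workers whose processes you have confirmed are gone:

require 'resque'

# Each registered worker prints as "hostname:pid:queues".
Resque.workers.each { |worker| puts worker.to_s }

# After confirming a worker's process is really dead, remove its keys:
dead = Resque.workers.select { |w| w.to_s.start_with?('old-host:') } # adjust this filter
dead.each(&:unregister_worker)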
My server process is basically an API that responds to REST requests.
Some of these requests are for starting long running tasks.
Is it a bad idea to do something like this?
get "/crawl_the_web" do
Thread.new do
Crawler.new # this will take many many days to complete
end
end
get "/status" do
"going well" # this can be run while there are active Crawler threads
end
The server won't be handling more than 1000 requests a day.
Not the best idea....
Use a background job runner to run jobs.
POST /crawl_the_web should simply add a job to the job queue. The background job runner will periodically check for new jobs on the queue and execute them in order.
You can use, for example, delayed_job for this, setting up a single separate process to poll for and run the jobs. If you are on Heroku, you can run delayed_job in a separate background worker dyno.
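A rough sketch of that shape, assuming the ActiveRecord backend for delayed_job (CrawlJob is a placeholder, and the database connection setup is omitted):

require 'sinatra'
require 'delayed_job_active_record'

# delayed_job will run any object that responds to #perform.
class CrawlJob
  def perform
    Crawler.new # the long-running crawl happens in the worker process, not the web process
  end
end

post '/crawl_the_web' do
  Delayed::Job.enqueue(CrawlJob.new)
  'crawl enqueued'
end

get '/status' do
  "#{Delayed::Job.count} job(s) pending"
end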
If you do this, how are you planning to stop/restart your Sinatra app? When you finally deploy your app, it is probably going to be served by unicorn, passenger/mod_rails, etc. Unicorn will manage the lifecycle of its child processes, and it will have no knowledge of these long-running threads that you might have launched - and that's a problem.
As someone suggested above, use delayed_job, resque or any other queue-based system to run background jobs. You get persistence of the jobs, you get horizontal scalability (just launch more workers on more nodes), etc.
Starting threads during request processing is a bad idea.
Besides the fact that you cannot control your worker threads (start/stop them in a controlled way), you'll quickly get into trouble if you start a thread inside request processing. Think about what happens: the request ends and the process gets ready to serve the next request, while your worker thread still runs and accesses process-global resources like the database connection, open files, class variables, global variables and so on. Sooner or later, your worker thread (or any library used from it) will affect the main thread somehow, break other requests, and it will be almost impossible to debug.
You're really better off using separate worker processes. delayed_job for example is a really small dependency and easy to use.