Resque + Sinatra + Heroku: how to run the jobs continuously - ruby

I have already set up Redis + Resque and deployed to Heroku. Everything works fine, and the jobs are added to the queue correctly, but they won't run until I run the command
heroku run rake jobs:work
How do I tell Heroku to run the jobs in the queue automatically in the background?
I'm using Sinatra, not Rails.
Thank you very much.

You need to add a worker process to your application that continuously runs the rake jobs:work task for you.
You can do this via the Heroku dashboard.
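On Heroku the worker process type is declared in a Procfile at the root of the app. A minimal sketch, assuming a Sinatra app served with rackup and a Rakefile that loads Resque's bundled worker task via require 'resque/tasks' (Resque's task is resque:work; substitute jobs:work if that is the task you already have):

web: bundle exec rackup config.ru -p $PORT
worker: bundle exec rake resque:work QUEUE=*

After pushing, scale the worker dyno once with heroku ps:scale worker=1 and Heroku will keep it running alongside the web dyno.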

There is a much better (IMHO) way to do this using IronWorker. Iron.io will basically always be cheaper, and I find the approach easier to set up and use. http://www.iron.io/

Related

Queued jobs are somehow being cached with Laravel Horizon using Supervisor

I have a really strange thing happening with my application that I am really struggling to debug and was wondering if anyone had any ideas or similar experiences.
I have an application running on Laravel v5.8 which is using Horizon to run the queued jobs on a Ubuntu 16.04 server. I have a feature that archives an account which is passed off to the queue.
I noticed that it didn't seem to be working, despite working locally and having had the tests passing for the feature.
My last attempt at debugging was commenting out the entire handle method and adding Log::info('wtf?!'); to see if even that would work. It didn't; in fact, it was still trying to run the commented-out code. I decided to restart supervisor and tried again. At last, I managed to get 'wtf?!' written to my logs.
I have since been unable to deploy my code without having to restart supervisor in order for it to recognise the 'new' code.
Does Horizon cache the jobs in any way? I can't see anything in the documentation.
Has anyone experienced anything like this?
Any ideas on how I can stop having to restart supervisor every time?
Thanks
As stated in the Laravel queue documentation:
Remember, queue workers are long-lived processes and store the booted application state in memory. As a result, they will not notice changes in your code base after they have been started. So, during your deployment process, be sure to restart your queue workers.
Alternatively, you may run the queue:listen command. When using the queue:listen command, you don't have to manually restart the worker after your code is changed; however, this command is not as efficient as queue:work.
And as stated in the Horizon documentation:
If you are deploying Horizon to a live server, you should configure a process monitor to monitor the php artisan horizon command and restart it if it quits unexpectedly. When deploying fresh code to your server, you will need to instruct the master Horizon process to terminate so it can be restarted by your process monitor and receive your code changes.
When you restart supervisor, you are restarting the horizon command and loading the new code, so the behaviour you are seeing is exactly what is expected.
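In practice that means adding a terminate step to your deployment script, using the command from the Horizon docs quoted above:

php artisan horizon:terminate

Horizon finishes its in-flight jobs and exits, and supervisor restarts it with the fresh code, so you no longer have to restart supervisor by hand on every deploy.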

Capistrano intervals between restarts of the servers to ensure service continuity

I know the best practice for pushing changes to production is to have two sets of servers, A and B: A serves the website to clients while the update is pushed to B, and then A and B are switched to ensure continuity of service. But this feels kinda hard to implement with Capistrano (?)
I currently have a pool of autoscaled servers on the Amazon cloud. Using Capistrano, my deploy command deploys the update to all the servers and restarts them all at the same time. While Passenger restarts itself there is downtime on my production servers (and restarting can take up to 10 seconds, so it's a problem).
To avoid this, I'd like to restart my servers one at a time and wait x seconds before restarting the next one (I don't mind having two different versions of the code online; the target scenario I have in mind is deploying a small hotfix).
Is there a way to override Capistrano's restart task so it waits some time before running the command on the next server?
This is built in:
on :all, in: :sequence, wait: 15 do
# Your restart task
end
I am actually using capistrano-passenger to restart my servers, and I just noticed there is a set :passenger_restart_wait, 5 option which seems to do that already!
From the gem readme:
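A minimal config/deploy.rb sketch using the options documented in the capistrano-passenger readme (the wait value below is just an illustration):

# config/deploy.rb
set :passenger_restart_runner, :sequence  # restart one server at a time
set :passenger_restart_wait, 10           # seconds to wait between restarts

With the :sequence runner, capistrano-passenger restarts Passenger host by host instead of everywhere at once, which is the rolling-restart behaviour asked about.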

Run Go script inside Docker Container or cron job?

I have a Go application deployed with Docker. Besides running the main program, I want to run a periodic job that updates my data.
Which is better?
Run the periodic job with concurrency (a channel/goroutine) inside the main program.
Use crontab to register the periodic job on the system, but I don't know how to do this inside Docker.
In the Dockerfile, or in Docker generally, what is the best way to run a separate cron job?
Please help me. Thanks!
If you are developing the application and all you need is basic periodic execution of one "job", I would implement it in your app. If things get more complicated, I would build on an image such as https://github.com/phusion/baseimage-docker, which brings support for managing multiple container processes (including cron).
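A minimal sketch of the in-app approach, using a ticker in a goroutine (updateData and runMainProgram are hypothetical placeholders for your own code):

package main

import "time"

// updateData is a stand-in for your periodic update job.
func updateData() {
	// refresh your data here
}

// runMainProgram is a stand-in for the long-running main work.
func runMainProgram() {
	select {}
}

func main() {
	// run the updater every hour, alongside the main program
	ticker := time.NewTicker(1 * time.Hour)
	defer ticker.Stop()

	go func() {
		for range ticker.C {
			updateData()
		}
	}()

	runMainProgram()
}

This keeps everything in one container process, so nothing extra is needed in the Dockerfile.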

Resque error when running on my local machine

When I run the project locally without Resque running, I get an error message when using enqueue. I understand that completely, as the Resque server is not running.
Is there a way to catch that error so that I can display it as a flash error message instead of halting execution?
What I usually do is run the jobs through Resque when Rails.env is production or staging; in development, the jobs are run directly.
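A minimal sketch of that pattern; enqueue_or_run is a hypothetical helper name:

# enqueue in production/staging, run inline everywhere else
def enqueue_or_run(job_class, *args)
  if Rails.env.production? || Rails.env.staging?
    Resque.enqueue(job_class, *args)
  else
    job_class.perform(*args) # no Redis needed in development
  end
end

If you specifically want the flash message instead, you can rescue the connection error around the Resque.enqueue call (with recent versions of the redis gem that is Redis::CannotConnectError) and set the flash there.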

run Ruby/EventMachine script as a system service

I've written a simple UDP server in Ruby using EventMachine. I'd like to keep it always running on my Linux box. Any suggestions on how to wrap it up as a system service, or in some other form that launches at start-up, stays running, and can be monitored?
As you are on Linux, you can use the daemons gem:
http://daemons.rubyforge.org/
http://railscasts.com/episodes/129-custom-daemon
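A minimal sketch of a control script using the daemons gem; udp_server.rb is a hypothetical path to your EventMachine script:

# udp_server_control.rb
require 'daemons'

Daemons.run('udp_server.rb')

You then manage it with ruby udp_server_control.rb start (or stop, restart, status), and the gem handles daemonizing and the PID file for you.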
The Thin webserver, which is built on top of EventMachine, uses the daemons gem: https://github.com/macournoyer/thin/blob/master/lib/thin/daemonizing.rb
To keep it running, use Monit, which can be configured to check that the process is running, restart it if it's not, restart it if it starts using too many system resources, or act on an endless array of other possible conditions.
I would use cron (see the Advanced Crontab section: https://help.ubuntu.com/community/CronHowto#Advanced_Crontab). A well-behaved daemon should check whether it is already running before starting again.
All these answers are outdated. Ruby has a built-in Process.daemon method: http://www.ruby-doc.org/core-2.1.0/Process.html#method-c-daemon
Just add Process.daemon to your application before EM.run and it should all work.
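A minimal sketch of that combination; EchoHandler, the address, and the port are illustrative placeholders:

require 'eventmachine'

# trivial handler that echoes each datagram back to the sender
module EchoHandler
  def receive_data(data)
    send_data(data)
  end
end

Process.daemon # detach from the terminal before starting the reactor

EM.run do
  EM.open_datagram_socket('0.0.0.0', 9000, EchoHandler)
end

By default Process.daemon changes the working directory to / and redirects the standard streams to /dev/null, so make sure any relative paths and logging are set up accordingly.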
