Resque error when running on my local machine - ruby-on-rails-3.1

When I run the project locally without Resque running, I get an error message when I call enqueue. I understand that completely, as the Resque server is not running.
Is there a way to catch that error so that I can display it as a flash error message instead of halting execution?

What I usually do is run the jobs through Resque only when Rails.env is production or staging. In development, the jobs are performed directly.
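A minimal sketch of both ideas, assuming a hypothetical MyJob Resque job class inside a typical controller action; with the redis gem, a failed connection usually surfaces as Redis::CannotConnectError (older versions raised Errno::ECONNREFUSED), so that is the error to rescue:
def create
  if Rails.env.production? || Rails.env.staging?
    begin
      # MyJob and params[:id] are placeholders; use your own job class and arguments.
      Resque.enqueue(MyJob, params[:id])
    rescue Redis::CannotConnectError => e
      # Redis/Resque is down: surface it as a flash message instead of a 500 error.
      flash[:error] = "Could not queue the job: #{e.message}"
    end
  else
    # In development, skip the queue and run the job inline.
    MyJob.perform(params[:id])
  end
  redirect_to root_path
end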

Related

Queued jobs are somehow being cached with Laravel Horizon using Supervisor

I have a really strange thing happening with my application that I am really struggling to debug and was wondering if anyone had any ideas or similar experiences.
I have an application running on Laravel v5.8 which uses Horizon to run the queued jobs on an Ubuntu 16.04 server. I have a feature that archives an account, which is passed off to the queue.
I noticed that it didn't seem to be working on the server, despite working locally and having passing tests for the feature.
My last debugging attempt was to comment out the entire handle method and add Log::info('wtf?!'); to see if even that would work. It didn't; in fact, the worker was still running the commented-out code. I decided to restart Supervisor and tried again, and at last I got 'wtf?!' written to my logs.
I have since been unable to deploy my code without having to restart supervisor in order for it to recognise the 'new' code.
Does Horizon cache the jobs in any way? I can't see anything in the documentation.
Has anyone experienced anything like this?
Any ideas on how I can stop having to restart supervisor every time?
Thanks
As stated in the documentation here:
Remember, queue workers are long-lived processes and store the booted application state in memory. As a result, they will not notice changes in your code base after they have been started. So, during your deployment process, be sure to restart your queue workers.
Alternatively, you may run the queue:listen command. When using the queue:listen command, you don't have to manually restart the worker after your code is changed; however, this command is not as efficient as queue:work.
And as stated here in the Horizon documentation.
If you are deploying Horizon to a live server, you should configure a process monitor to monitor the php artisan horizon command and restart it if it quits unexpectedly. When deploying fresh code to your server, you will need to instruct the master Horizon process to terminate so it can be restarted by your process monitor and receive your code changes
When you restart Supervisor, you are essentially restarting the horizon command and loading the new code, so the behaviour you are seeing is exactly what is expected.
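In practice, the usual fix is to have your deployment script tell the running master process to shut down gracefully, e.g. by running php artisan horizon:terminate as the last deployment step (or by restarting the Supervisor program directly with something like sudo supervisorctl restart horizon, where the program name horizon is an assumption and should match your Supervisor config). Supervisor then starts a fresh Horizon process that has the new code loaded.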

Apache Flink on Windows

First, I am a complete newbie with Flink. I have installed Apache Flink on Windows.
I start Flink with start-cluster.bat. It prints out
Starting a local cluster with one JobManager process and one
TaskManager process. You can terminate the processes via CTRL-C in the
spawned shell windows. Web interface by default on
http://localhost:8081/.
Anyway, when I submit the job, I get a bunch of messages like:
DEBUG org.apache.flink.runtime.rest.RestClient - Received response
{"status":{"id":"IN_PROGRESS"}}.
In the log in the web UI at http://localhost:8081/, I see:
2019-02-15 16:04:23.571 [flink-akka.actor.default-dispatcher-4] WARN
akka.remote.ReliableDeliverySupervisor
flink-akka.remote.default-remote-dispatcher-6 - Association with
remote system [akka.tcp://flink@127.0.0.1:56404] has failed, address
is now gated for [50] ms. Reason: [Disassociated]
If I go to the Task Manager tab, it is empty.
I tried to find out whether any port needed by Flink was already in use, but that does not seem to be the case.
Any idea to solve this?
I was running Flink locally using IntelliJ, with the Maven archetype that gives you ready-to-go example projects:
https://ci.apache.org/projects/flink/flink-docs-stable/dev/projectsetup/java_api_quickstart.html
You don't necessarily have to install Flink at all unless you are running it as a service on a cluster. For a single run of your code, the IDE will compile it and execute it against a throwaway local Flink instance just fine.
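For reference, the quickstart linked above generates such a project from the flink-quickstart-java Maven archetype; when you run the generated main class from the IDE, the environment returned by StreamExecutionEnvironment.getExecutionEnvironment() (or ExecutionEnvironment for the batch API) spins up an embedded local mini-cluster for that run, so start-cluster.bat is not needed for local development.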

Laravel Horizon inactive and still processing

I run my Application on Kubernetes.
I have one Service for requests and one service for the worker processes.
If I access the Horizon UI, it often shows the Inactive status, but jobs are still being processed by the worker; I know this because the JOBS PAST HOUR counter keeps increasing.
If I scale up my worker service, jobs constantly "fail" with the exception Illuminate\Queue\MaxAttemptsExceededException.
If I connect directly to the pods and run ps aux, I can see that there are horizon processes running.
If I connect to a pod on which the worker is running and execute the horizon:list command it tells me that one (or multiple) Masters are running.
How can I further debug this?
Laravel version: 5.7.15
Horizon version: 2.0.0
Redis version: 3.2.4
The issue was that the server time was out of sync, so the "old" master processes kept getting restarted.

Heroku Redis max memory error

My production environment has started constantly throwing this error:
Error fetching message: ERR Error running script (call to f_0ab965f5b2899a2dec38dec41fff8c97f7a33ee9): #user_script:56: #user_script: 56: -OOM command not allowed when used memory > 'maxmemory'.
I am using the Heroku Redis addon with a worker dyno running Sidekiq.
Both Redis and the Worker Dyno have plenty of memory right now and the logs don't show them running out.
What is causing this error to be thrown and how can I fix it?
I had a job that required more memory than I had available in order to run.
Run "config get maxmemory" on your redis server. Maybe that config is limiting the amount of memory Redis is using.

Resque + Sinatra + Heroku how to run the jobs continuously

I have already set up Redis + Resque and deployed to Heroku. Everything works fine, and the jobs are added to the queue correctly, but they won't be run until I run the command
heroku run rake jobs:work
How do I tell heroku to run the jobs in the queue automatically in background?
I'm using Sinatra and not Rails.
Thank you very much.
You need to add a worker process to your application that will automatically run the rake jobs:work task for you continuously.
You can do this via the UI on Heroku.
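The usual way to declare that worker process is an entry in your Procfile, for example (this assumes Resque's standard rake task; with resque/tasks required in your Rakefile the task is resque:work rather than jobs:work):
worker: bundle exec rake resque:work QUEUE=*
After deploying, scale it with heroku ps:scale worker=1 so the dyno actually starts and keeps draining the queue.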
There is a much better (IMHO) way to do this using IronWorker. Iron.io will basically always be cheaper, and I find the approach easier to set up and use. http://www.iron.io/
