I am using beanstalkd in a Laravel project to handle jobs on a queue. Beanstalkd is running locally. What I want to do is add one or more remote servers to handle some jobs when the queue gets bigger. I know that with Laravel I can send a job to a specific remote connection, but that way I don't know the load on each server before sending the job.
I was wondering if beanstalkd supports load balancing between servers, and error handling when, for example, a remote job fails.
Thank you
Beanstalkd doesn't have features for load balancing.
You could set up HAProxy as a balancer and register multiple servers with beanstalkd installed behind it. When you dispatch jobs from your Laravel code, you send them to HAProxy, and HAProxy decides which backend server gets the job, since it tracks each backend's load and knows when one has an incident.
In the code you just need to change the IP to point at the balancer.
In your infrastructure you need a balancer (HAProxy) set up in front of a pool of Beanstalkd servers.
We usually have 2 machines, and they are configured like this:
- Machine 1: HAProxy, Apache, MySQL, Laravel, Beanstalkd
- Machine 2: MySQL, Laravel, Beanstalkd
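As a sketch of the balancer piece, a minimal haproxy.cfg fragment that fronts beanstalkd's TCP port across the two machines above (the hostname and the 11301 port are placeholder assumptions, since HAProxy and beanstalkd would otherwise collide on 11300 on Machine 1):

listen beanstalkd
    bind *:11300
    mode tcp
    balance leastconn
    # Machine 1's beanstalkd moved to 11301 so HAProxy can own 11300
    server bs1 127.0.0.1:11301 check
    server bs2 machine2.internal:11300 check

With this in place, Laravel only needs its beanstalkd host (BEANSTALKD_HOST in .env) pointed at the proxy address; nothing else in the code changes.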
I have a Laravel app running locally via ./vendor/bin/sail up. I also have a trivial Node.js server (also running locally) that waits 60 seconds on each request and then returns dummy data. The Laravel app makes a request to the Node app and becomes unresponsive to client requests until the 60 seconds are up.
Is this a limitation of the Laravel dev server? Is there a setting I'm missing?
Answering my own question.
Under the hood, Sail runs php artisan serve, which in turn uses PHP's built-in web server, which by default "runs only one single-threaded process."
However, "You can configure the built-in webserver to fork multiple workers in order to test code that requires multiple concurrent requests to the built-in webserver. Set the PHP_CLI_SERVER_WORKERS environment variable to the number of desired workers before starting the server. This is not supported on Windows."
Adding PHP_CLI_SERVER_WORKERS=5 to my .env file fixed the issue.
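For reference, if you run the dev server outside Sail, the same variable can be set inline for the artisan serve invocation (per the PHP docs, this is not supported on Windows):

PHP_CLI_SERVER_WORKERS=5 php artisan serve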
I use Laravel Homestead (a Vagrant box running on VirtualBox) to provision my server. I currently use the default queue to run my jobs, but after dispatching to a new queue the jobs just don't get picked up, probably because I haven't set it up on Homestead yet. How do I set up multiple queues on Laravel Homestead?
As mentioned in the documentation: "Since queue workers are long-lived processes, they will not pick up changes to your code without being restarted. So, the simplest way to deploy an application using queue workers is to restart the workers during your deployment process."
To pick up your new queue, connect to your server via SSH and restart the workers with:
php artisan queue:restart
For more information, check the docs on queue workers.
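Also make sure a worker is actually listening on the new queue; workers only poll the queues you pass them. For example (the queue names here are placeholders):

php artisan queue:work --queue=new-queue,default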
So I want to run two Socket.IO / Laravel Echo Server instances on one physical server. On this server there is a virtual host for production and another one for testing purposes. Both need a connection to a WebSocket server running on the same machine.
Is this possible without broadcasting events to the wrong instance? Or should I run a second instance of Laravel Echo Server on another port and let both environments connect to different Socket.IO servers?
Is there a common approach?
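One common approach, sketched here under the assumption that both environments broadcast through Redis: run a second Echo Server instance on another port, and point each instance at its own Redis database so events can't leak across environments. The ports and database numbers below are examples, and only the relevant parts of each laravel-echo-server.json are shown:

Production instance:
{ "port": "6001", "database": "redis", "databaseConfig": { "redis": { "db": 0 } } }

Testing instance:
{ "port": "6002", "database": "redis", "databaseConfig": { "redis": { "db": 1 } } }

Each Laravel environment then sets its REDIS_DB to match, and each frontend's Echo client connects to its own port.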
Our PHP application runs on Apache; however, php-pm can be used to make the application persistent (no warmup, the database connection remains open, caches stay populated, and the application stays initialized).
I'd like to keep this behaviour when deploying to Heroku, that is, have Apache serve static content and php-pm serve the API layer. Locally this is handled using an .htaccess rewrite proxy rule that sends all traffic from /middleware to php-pm listening on, e.g., port 8082.
On heroku it would mean running two processes in the web dyno. Is that possible?
If not, are there alternatives for handling web traffic through different processes, or for making a persistent process listen to web traffic?
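For context, the local .htaccess rule described above might look like this (a sketch assuming mod_rewrite and mod_proxy are enabled; the /middleware path and port 8082 come from the question):

RewriteEngine On
# Proxy everything under /middleware to the persistent php-pm process
RewriteRule ^middleware/(.*)$ http://127.0.0.1:8082/$1 [P,L]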
I spun up a Mesosphere cluster on DigitalOcean (development) and it's not allowing external (non-VPN) connections to containers or apps. How can this be solved?
To keep the outside world from accessing your cluster, iptables rules are installed by default. They allow full access inside the cluster and nothing externally.
If you're interested in running real applications, I'd recommend the following:
- Put HAProxy on a single node.
- Set up the haproxy-marathon-bridge script.
- On the same box where you installed HAProxy, configure iptables to allow external access to the port HAProxy is listening on (see the sketch below).
By doing this, you'll have a single place to refer to when giving access to applications running on your Mesos cluster. No matter where Marathon schedules the app or container, you'll always be able to reach it via HAProxy.
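For the iptables step, a minimal sketch, assuming HAProxy listens on port 80:

# -I inserts the rule ahead of the existing deny rules
sudo iptables -I INPUT -p tcp --dport 80 -j ACCEPT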