Can multiple clients connect to a Laravel dev server simultaneously?

I have a Laravel app running locally using ./vendor/bin/sail up. I also have a trivial NodeJS server (running locally as well) that waits 60 seconds on each request and returns dummy data. The Laravel app makes a request to the Node app and becomes unresponsive to client requests until the 60 seconds are up.
Is this a limitation of the Laravel dev server? Is there a setting I'm missing?

Answering my own question.
Laravel's Sail runs php artisan serve under the hood, which in turn uses PHP's built-in web server, which by default "runs only one single-threaded process."
However, "You can configure the built-in webserver to fork multiple workers in order to test code that requires multiple concurrent requests to the built-in webserver. Set the PHP_CLI_SERVER_WORKERS environment variable to the number of desired workers before starting the server. This is not supported on Windows."
Adding PHP_CLI_SERVER_WORKERS=5 to my .env file fixed the issue.
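Per the PHP docs quoted above, the variable can also be set for a one-off run without touching .env (assuming you run it from the project root; not supported on Windows):

```shell
# Start the dev server with 5 worker processes instead of the default single one
PHP_CLI_SERVER_WORKERS=5 php artisan serve
```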

Related

laravel production in windows server

I'm new to Laravel and have just finished my first project. I have XAMPP, MySQL, and Windows Server 2012 (with Composer and Laravel 7 installed). I'm running the application on the server via the command below:
php artisan serve --host=[ip] --port=[port]
The application runs fine within the network. However, it seems super slow. I'm not sure if this is the correct way of deploying it to production. Do you have any recommendations/instructions on where I can run/deploy apps with the resources that I have?
The speed of your Laravel project depends on many parameters, like hardware configuration (CPU, RAM, ...), the number of users, and the volume of requests and processes. But note that for an app in production it's better not to use the php artisan serve command; it's only for the development environment. I don't know how much you know about Linux, but I suggest you change your stack and use Linux as your production OS, with nginx as the web server alongside php-fpm to run your app in production. For now, though, you should configure XAMPP's Apache web server to serve your application from the project's public folder. Here is a link you can use: https://insidert.medium.com/setting-up-laravel-project-on-windows-2aa7e4f080da
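As a sketch of the suggested nginx + php-fpm stack, a minimal server block for a Laravel app looks roughly like this (the root path and php-fpm socket location are assumptions; the official Laravel deployment docs carry the full version):

```nginx
server {
    listen 80;
    server_name example.com;
    # Laravel must be served from the public/ directory, never the project root
    root /var/www/myapp/public;
    index index.php;

    location / {
        # Send anything that isn't a real file to Laravel's front controller
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        # Socket path varies by distro and PHP version
        fastcgi_pass unix:/run/php/php-fpm.sock;
    }
}
```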
Good luck!

Is running multiple web: processes possible?

Our PHP application runs on Apache; however, php-pm can be used to make the application persistent (no warmup, the database connection remains open, caches are populated, and the application stays initialized).
I'd like to keep this behaviour when deploying to Heroku, that is, have Apache serve static content and php-pm serve the API layer. Locally this is handled using an .htaccess Rewrite proxy rule that sends all traffic from /middleware to php-pm listening on e.g. port 8082.
On Heroku this would mean running two processes in the web dyno. Is that possible?
If not, are there other alternatives that can be used to handle web traffic through different processes, or to make a persistent process listen to web traffic?
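The .htaccess proxy rule described above might look roughly like this (the /middleware prefix and port 8082 come from the question; it assumes mod_rewrite and mod_proxy are enabled in the Apache build):

```apache
RewriteEngine On
# [P] turns the rewrite into a reverse-proxy pass-through to php-pm
RewriteRule ^middleware/(.*)$ http://127.0.0.1:8082/$1 [P,L]
```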

AJAX requests not working properly with gunicorn when logged out from server

I deployed my first Django application to a server this weekend. It was a clean VPS, so I had to install PostgreSQL and PostGIS, set up a virtualenv, and so on. It took some time to get the application working.
On the Django development server everything works fine, but when I deployed my app with gunicorn (behind Nginx) on the VPS, AJAX requests stopped working properly. I have three AJAX requests sent one right after the other, and only one or two of them returned a value. So I found gevent and this thread (Django AJAX requests during regular request not going through) and ran gunicorn with this command:
gunicorn myapp.wsgi:application --bind 0.0.0.0:9000 -k gevent --worker-connections 1001 --workers=3
and it works. All requests return values and everything looks OK. So I put the process in the background and logged out from the server.
But every time I log out from the server, the requests stop working. The behavior is the same as the first time, without gevent. Could the problem be with activating the virtual environment or some setting, or is this standard behaviour of Ubuntu as a server?
I don't even know where should I find solution so I will be glad for any help.
You currently stop the process on logout. You need a process manager that monitors the process. Read the gunicorn docs for a lot of possible solutions.
I suggest you use supervisor. It will make sure gunicorn runs, and restart it if it crashes. Install it with sudo apt-get install supervisor.
Let's assume you have a website called test; you could use the following test.conf (inside /etc/supervisor/conf.d/, where supervisor picks up *.conf files):
[program:test]
directory=/home/test/www
command=/home/test/commands/start
user=nobody
autostart=true
autorestart=true
redirect_stderr=true
stopsignal=QUIT
stopasgroup=true
killasgroup=true
Where /home/test/www is the location of your Django application (you can change it, of course), and /home/test/commands/start is a script in which you tell gunicorn to run (the command you pasted).
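A start script of the kind described might look like this (the virtualenv path is a guess; the gunicorn command is the one from the question):

```shell
#!/bin/sh
cd /home/test/www
# Activate the project's virtualenv so gunicorn and gevent are on PATH
. /home/test/venv/bin/activate
# exec replaces the shell, so supervisor tracks gunicorn's PID directly
exec gunicorn myapp.wsgi:application --bind 0.0.0.0:9000 \
    -k gevent --worker-connections 1001 --workers=3
```

After dropping the config into /etc/supervisor/conf.d/, run sudo supervisorctl reread && sudo supervisorctl update to load and start it.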

django server replies synchronously to multiple ajax get requests

I have a web page that sends multiple AJAX GET requests to a Django server.
The server does some internet crawling for each request it gets and returns a response to the client once it finishes.
It seems that the server replies to the requests one after another.
I'm considering sending each request to a Celery worker to make the server's responses asynchronous, but I'm not sure if that will solve the problem.
Also, I'm using Django on Heroku and am not sure how to combine Celery with Django on Heroku.
The Heroku Django tutorial app uses gunicorn as the server, as you can see here:
https://github.com/heroku/python-getting-started/blob/master/Procfile
There is no special gunicorn config in the tutorial app, so you are running with gunicorn default settings. You can see what they are here:
http://docs.gunicorn.org/en/stable/settings.html#worker-processes
You'll note that this means you have a single worker process of the sync type (i.e. no greenlet magic to enable concurrency within the single Python process).
The docs do say you can scale gunicorn to use multiple worker processes (still on a single Heroku dyno) by setting the WEB_CONCURRENCY environment variable; on Heroku this is easily done from your local shell using the CLI tools:
$ heroku config:set WEB_CONCURRENCY=4
Gunicorn docs suggest setting this to "A positive integer generally in the 2-4 x $(NUM_CORES) range." Your basic Heroku dyno will be a single core.
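Gunicorn itself reads WEB_CONCURRENCY as the default for its workers setting, so outside Heroku the equivalent is passing the flag directly (the module name myapp.wsgi is hypothetical):

```shell
# These two invocations are equivalent:
WEB_CONCURRENCY=4 gunicorn myapp.wsgi
gunicorn myapp.wsgi --workers 4
```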

How do I go about setting up my Sinatra REST API on a server?

I'm an iOS developer primarily. In building my current app, I needed a server that would have a REST API with a couple of GET requests. I spent a little time learning Ruby, and landed on using Sinatra, a simple web framework. I can run my server script, and access it from a browser at localhost:4567, with a request then being localhost:4567/hello, as an example.
Here's where I feel out of my depth. I setup an Ubuntu droplet at DigitalOcean, and felt my way around to setting up all necessary tools via command line, until I could again run my server, now on this droplet.
The problem then is that I couldn't access my server via droplet.ip.address:4567, and a bit of research led me to discover I need Passenger and an Apache HTTP Server to be set up, and not with simple instructions.
I'm way in over my head here, and I don't feel comfortable. There must be a better way for me to take my small group of ruby files and run this on a server, than me doing this. But I have no idea what I'm doing.
Any help or advice would be greatly appreciated.
a bit of research led me to discover I need Passenger and an Apache HTTP Server to be set up, and not with simple instructions.
Ignore that for now. Take baby steps first. You should be able to run your Sinatra app from the command line on the DigitalOcean droplet, and then access it via droplet.ip.address:4567. If that doesn't work, something very fundamental is wrong.
When you start your app, you will see what address and port the app is listening on. Make sure it's 0.0.0.0 and 4567. If it's 127.0.0.1 or localhost, that means it will only service requests originating from the same machine.
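For a classic-style Sinatra app, the bind address and port can be forced from the command line with Sinatra's built-in options (app.rb is a placeholder for your main file):

```shell
# -o sets the bind address, -p the port; 0.0.0.0 accepts outside connections
ruby app.rb -o 0.0.0.0 -p 4567
```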
After you get this working, the next step is to turn your Sinatra app into a service. Essentially this means the app runs in the background and auto-starts when the system reboots. Look into Supervisor, which needs only very simple configuration to get this running.
Later you can install Apache or Nginx to put in front of your Sinatra app. These are proxies that simply forward requests from port 80 (the default HTTP port) to your Sinatra app, but they can do additional things such as add SSL support, load balancing, custom error pages, etc., none of which you need right now.
