Laravel on Kubernetes - slow Composer vendor autoload (production)

We have set up a Kubernetes cluster for our Laravel application on Google Cloud Platform.
Containers:
- application code + php-fpm
- apache2
- others not related to the issue
(We run under the nginx-ingress-controller, but this seems unrelated to the issue.)
We ran a JMeter stress test on a simple Laravel route that returns "ok" and noticed terrible response times.
Afterwards we ran the same test against an index2.php (inside the public dir, to bypass the framework) which also just returns "ok".
And we got this result: the bare script responded far faster(!).
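For reference, such a bypass script is essentially nothing more than the following (the actual file was not included here):
index2.php:
<?php
echo 'ok';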
After digging we found out that Composer's autoloading was causing the slowness.
Any advice on how this could be resolved will be highly appreciated.
Thanks

OK, we found out that we did not have OPcache enabled.
As the Composer documentation on optimize-autoloader says:
On PHP 5.6+, the class map is also cached in opcache which improves the initialization time greatly. If you make sure opcache is enabled, then the class map should load almost instantly and then class loading is fast.
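For anyone hitting the same wall, the fix boils down to two pieces. The values below are illustrative rather than taken from our actual cluster, and the exact ini file name depends on your PHP image:
opcache.ini (loaded by the php-fpm container):
opcache.enable=1
opcache.memory_consumption=128
opcache.max_accelerated_files=20000
; only safe when the code baked into the image never changes between deploys
opcache.validate_timestamps=0
And build the optimized class map when baking the image (or during deploy):
composer install --no-dev --optimize-autoloader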

Related

Api-Platform "No operations defined in spec!" in Symfony 5.1 dev system

I am using Symfony 5.1.8 in an existing project and installed Api Platform, version 2.5.7:
composer req api
I added an @ApiResource() annotation to one of my entity classes.
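For illustration, the annotation usage looks roughly like this (the entity below is a made-up example, not the actual class):
src/Entity/Book.php:
<?php
namespace App\Entity;

use ApiPlatform\Core\Annotation\ApiResource;
use Doctrine\ORM\Mapping as ORM;

/**
 * @ApiResource()
 * @ORM\Entity()
 */
class Book
{
    /**
     * @ORM\Id()
     * @ORM\GeneratedValue()
     * @ORM\Column(type="integer")
     */
    private $id;
}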
When calling the /api/ route there is always just a message saying "No operations defined in spec!". The problem only occurs on my dev system (PHP 7.4.11, macOS Catalina 10.15.6, Xdebug...), so I do not think it is a configuration problem...
When I deploy this to my test system (Debian with Docker containers), everything works as expected - the six resources I can interact with are shown.
I tried updating my composer dependencies, clearing the cache several times, and deleting the cache folder... none of this helped.
When calling
bin/console debug:router
on my test system, I get all six resources. On my dev system, no routes are shown.
Do you have any ideas where to start debugging to better understand the problem?
What further details would be useful?
Edit:
It works on the dev system when changing the environment to "test". But I still do not have a clue why...
Regards,
Stephan
I found the answer.
I am using Redis for the system cache, so deleting the cache folder (./var/cache) didn't help.
A normal cache:clear didn't help either:
bin/console cache:clear
But a cache clear of the system cache did the trick:
bin/console cache:pool:clear cache.system_clearer
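For context, the system cache ends up in Redis when it is wired up along these lines (a sketch; the exact pool configuration is not shown above), which is also why wiping ./var/cache alone cannot clear it:
config/packages/cache.yaml:
framework:
    cache:
        app: cache.adapter.redis
        system: cache.adapter.redis
        default_redis_provider: 'redis://localhost:6379'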

How can I add basic authentication to the MailHog service in DDEV-Local

I have an unusual setup where I want to provide some authentication on the MailHog feature of DDEV-Local. How can I add basic authentication?
Since it turns out that MailHog supports basic auth and DDEV-Local provides the ability to add extra files into the container at build time, you can do this (updated for DDEV v1.19.0):
Add these four files to .ddev/web-build in your DDEV-Local project:
mailhog.conf:
[program:mailhog]
command=/usr/local/bin/mailhog -auth-file=/etc/mailhog-auth.txt
autorestart=true
startretries=10
mailhog-auth.txt:
test:$2a$04$qxRo.ftFoNep7ld/5jfKtuBTnGqff/fZVyj53mUC5sVf9dtDLAi/S
Dockerfile:
ARG BASE_IMAGE
FROM $BASE_IMAGE
# bcrypt credentials consumed by MailHog's -auth-file flag
ADD mailhog-auth.txt /etc
# overrides the stock supervisor program definition so MailHog starts with -auth-file
ADD mailhog.conf /etc/supervisor/conf.d
# custom healthcheck script (full contents in the gist mentioned below)
ADD healthcheck.sh /
healthcheck.sh: (See gist - it's a little long to quote here.)
Now you can ddev start and the MailHog auth will be "test":"test". The MailHog auth page gives more detail about how to generate a better password, and it just goes into mailhog-auth.txt.
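If you want a credential other than test:test, MailHog itself can generate the bcrypt hash. For example, inside the web container (e.g. after ddev ssh), where the binary lives at /usr/local/bin/mailhog as in the supervisor config above:
mailhog bcrypt "my-better-password"
Put the resulting hash after the username and a colon in mailhog-auth.txt.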
As a follow-up to this issue: after an upgrade to DDEV v1.19.0, a project with basic auth configured for MailHog using the instructions listed here would no longer spin up. Only after deleting the entire DDEV setup for the project and re-applying one customization at a time did we isolate this as the problem. Needless to say, it was an afternoon wasted.
Not sure what changed in the web image for v1.19.0, but this solution, which was working fine in DDEV v1.18.2, no longer works.
Leaving this for anyone else who may be wrestling with the same issue.

Why don't my Laravel 5.2 events work on production when they work locally?

Locally, my Laravel 5.2 project works well, including events that are queued using Redis.
IMPORTANT UPDATE: I later discovered that this premise was incorrect (and my events don't use Redis), and so I'd accidentally posted this question in a misleading way. I hope my ridiculously long struggle and my answer below will be helpful to someone else who is an events newbie too.
But I've deployed my project to a production server (where I'm using a Laradock Docker setup).
There, on production, Redis works for caching and for delayed dispatching of jobs.
So I know that my Redis setup is good.
But events don't work (even though they worked when my project was on my local computer).
My question is not a duplicate of Laravel 5.2 event not firing in production because I'm not using broadcasting and because I am using Laradock.
I've also already tried these commands (inside the container, via docker exec -it laradock_workspace_1 bash):
php artisan config:cache
php artisan clear-compiled
php artisan optimize
composer install --no-dev
composer dumpautoload
php artisan queue:restart
My events are working on production now. Here is what I learned:
I'd read https://laravel.com/docs/5.2/events many times, but I don't know where/why I got the idea that "events" (which were a new concept to me) relied on Redis or cron jobs. So the entire premise of my question above was wrong! I was not using Illuminate\Contracts\Queue\ShouldQueue, so everything was synchronous and should have been more straightforward than I was thinking.
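To illustrate the distinction that tripped me up (a sketch with made-up class and event names, not my actual code): a listener only goes through Redis or any queue at all if it implements ShouldQueue; otherwise it runs inline during the request.
<?php
namespace App\Listeners;

use App\Events\UserRegistered;
use Illuminate\Contracts\Queue\ShouldQueue;

// Runs synchronously, inside the same request; no queue driver or worker involved.
class SendWelcomeEmail
{
    public function handle(UserRegistered $event)
    {
        // react to the event here...
    }
}

// Only a listener like this one is pushed onto the queue (Redis, database, etc.)
// and needs a running worker, e.g. php artisan queue:work.
class SendWelcomeEmailQueued implements ShouldQueue
{
    public function handle(UserRegistered $event)
    {
        // handled later by the worker...
    }
}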
I think this tip about composer dumpautoload and php artisan clear-compiled was helpful (to run after each edit of files on production).
The main issue seemed to be that a certain database table of mine had records with weird values; those records were being checked by the event, and that's where it was all breaking.
And I think those records probably got into that corrupted state because, on production, I had not started the cron jobs and workers immediately upon deployment.
My local environment was working because its table didn't have these corrupted records.
Hopefully my naive and misleading question (which led to this ridiculously long struggle and this answer) will be helpful to someone else who is an events newbie too.

Laravel Envoyer Deployment Hook Install Composer Dependencies Failed

We are using the Envoyer platform for Laravel deployment.
We have the problem that the deployment hook "Install Composer Dependencies" stops at 600 seconds on one server.
So the deployment has not gone through for days.
I found the possibility of setting a "process-timeout" config param in composer.json, but nothing happens; it is not taken into account.
I have no idea where else I should look to change or increase this value.
Can someone help me?
I asked Envoyer support about this and they said they have no plans to change the 600s timeout; you should optimize your deployments instead.
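For reference, the composer.json setting mentioned in the question looks like this; it only raises the timeout Composer applies to scripts and processes it spawns itself, so it cannot lift Envoyer's own 600-second hook limit:
composer.json (excerpt):
{
    "config": {
        "process-timeout": 2000
    }
}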

Symfony framework on Windows Azure cloud

Is it possible to run symfony (1.4) on the Windows Azure cloud?
The two things I'm wondering about are how to execute the symfony tasks and where symfony will save its cache files (blob storage?).
Thanks for your answers.
PHP is something that Microsoft is taking very seriously these days, so yes, Symfony can run on top of Azure, although documentation is sparse as most people stick to Linux servers.
Regarding tasks, there is a tool for running command-line tasks on Windows Azure, although I have not yet tried it myself:
http://azurephptools.codeplex.com/
In the meantime I got symfony 1.4 running inside the Windows Azure cloud. It was not as hard as expected. I was also able to write blob storage caching for symfony. Session handling works OK, but you need to modify the symfony session handler to work correctly with more than one server instance.
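For anyone attempting the same, the cache backend is just a subclass of sfCache. The skeleton below is a rough sketch only: the blob client is a placeholder for whatever Azure storage wrapper you use, and the remaining sfCache abstract methods still have to be implemented.
class sfAzureBlobCache extends sfCache
{
  protected $blobClient = null; // placeholder wrapper around the Azure blob storage API

  public function initialize($options = array())
  {
    parent::initialize($options);
    $this->blobClient = new MyAzureBlobClient($options['container']); // hypothetical client class
  }

  public function get($key, $default = null)
  {
    $data = $this->blobClient->fetch($key); // hypothetical call: returns the stored string or null
    return $data === null ? $default : unserialize($data);
  }

  public function set($key, $data, $lifetime = null)
  {
    return $this->blobClient->store($key, serialize($data)); // hypothetical call
  }

  public function has($key)
  {
    return $this->blobClient->exists($key); // hypothetical call
  }

  // remove(), removePattern(), clean(), getTimeout() and getLastModified()
  // must also be implemented to satisfy sfCache.
}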
