Laravel Envoyer Deployment Hook Install Composer Dependencies Failed

We are using the Envoyer platform for our Laravel deployments.
We have the problem that the deployment hook "Install Composer Dependencies" stops at 600 seconds on one server.
As a result, the deployment has not gone through for days.
I found that a config parameter "process-timeout" can be set in composer.json, but nothing happens; it is not taken into account.
I have no idea where else to look to change or increase this value.
Can someone help me?

I asked Envoyer support about this, and they said they have no plans to change the 600-second timeout; you should optimize your deployments instead.
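For reference, the Composer setting mentioned above goes into composer.json like this; note that this is only a sketch of Composer's own config, and process-timeout only controls how long Composer waits for its child processes (e.g. git clones or scripts), not the 600-second hook limit that Envoyer enforces:
{
    "config": {
        "process-timeout": 1800
    }
}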

Related

Api-Platform "No operations defined in spec!" in Symfony 5.1 dev system

I am using Symfony 5.1.8 in an existing project and installed Api Platform, version 2.5.7:
composer req api
I added an #ApiResource() Annotation to one of my entity classes.
When calling the /api/ route there is always just a message saying "No operations defined in spec!". The problem only occurs on my dev system (PHP 7.4.11, macOS Catalina 10.15.6, Xdebug...), so I do not think it's a configuration problem...
When I deploy this to my test system (Debian with Docker containers), everything works as expected - six resources are shown that I can interact with.
I tried updating my composer dependencies and clearing the cache several times, including deleting the cache folder... none of this helped.
When calling
bin/console debug:router
on my test system, I get all six resources. On my dev system, no routes are shown.
Do you have any ideas where to start debugging to better understand the problem?
What are further interesting details?
Edit:
It works on the dev system when I change the environment to "test", but I still have no clue why...
regards
Stephan
I found the answer.
I am using Redis for the system cache, so deleting the cache folder (./var/cache) didn't help.
A normal cache:clear didn't help either:
bin/console cache:clear
But a cache clear of the system cache did the trick:
bin/console cache:pool:clear cache.system_clearer
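For context, a Redis-backed system cache is typically wired up in config/packages/cache.yaml roughly like this; the adapter and DSN below are illustrative assumptions, not taken from the original post:
# config/packages/cache.yaml (illustrative)
framework:
    cache:
        system: cache.adapter.redis
        default_redis_provider: redis://localhost
With such a setup only part of the cache lives on disk in ./var/cache, which is why deleting that folder alone was not enough.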

How can I add basic authentication to the MailHog service in DDEV-Local

I have an unusual setup where I want to provide some authentication on the MailHog feature of DDEV-Local. How can I add basic authentication?
Since it turns out that MailHog supports basic auth and DDEV-Local provides the ability to add extra files into the container at build time, you can do this (updated for DDEV v1.19.0):
Add these four files to .ddev/web-build in your DDEV-Local project:
mailhog.conf:
[program:mailhog]
command=/usr/local/bin/mailhog -auth-file=/etc/mailhog-auth.txt
autorestart=true
startretries=10
mailhog-auth.txt:
test:$2a$04$qxRo.ftFoNep7ld/5jfKtuBTnGqff/fZVyj53mUC5sVf9dtDLAi/S
Dockerfile:
ARG BASE_IMAGE
FROM $BASE_IMAGE
ADD mailhog-auth.txt /etc
ADD mailhog.conf /etc/supervisor/conf.d
ADD healthcheck.sh /
healthcheck.sh: (See gist - it's a little long to quote here.)
Now you can ddev start and the MailHog auth will be "test":"test". The MailHog Auth page gives more detail about how to generate a better password, and it just goes into mailhog-auth.txt.
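If you want credentials other than test/test, MailHog itself can generate the bcrypt hash for the auth file; as a minimal sketch (the username and password here are placeholders), run inside the web container:
# ddev ssh, then:
mailhog bcrypt "my-secret-password"
# put the output into mailhog-auth.txt as a line of the form myuser:<hash>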
As a follow-up to this issue: after an upgrade to DDEV v1.19.0, a project with basic auth configured for MailHog using the instructions listed here would no longer spin up. It was only after deleting the entire DDEV setup for the project and re-implementing one customization at a time that we isolated this as the problem. Needless to say, it was an afternoon wasted.
Not sure what changed with the web image for v1.19.0, but this solution, which was working fine in DDEV v1.18.2, no longer works.
Leaving this for anyone else who may be wrestling with the same issue.

Critical Caching issue in Laravel Application (AWS Server)

I am facing a critical issue in my application, which is developed in Laravel and Angular. The issue is that I am getting the old email templates on the live site, while on my local server I get the latest updated ones. The deployment process is automatic: I just commit the code to Bitbucket, and a Bitbucket Pipeline then pushes the code directly to the AWS server.
I have already run the cache commands for Laravel and restarted the jobs, but I am still getting the same issue. If anyone has experienced the same issue or knows how to resolve it, please guide me!
I think you can try one of the following ways to overcome the issue; I faced a similar issue and resolved it this way -
Try deleting the cache files manually from storage/framework/views
Upload the code for the particular module directly to AWS, without using the pipeline
Restart your server
This will surely resolve your issue!
Since you are using a Laravel and Angular application deployed on AWS,
I assume that Bitbucket is pushing the code and build commands are fired on every push.
There are a few things which can help you:
Try to build the Angular side on every push, since the Angular build hashes all the files in the dist folder
Try to delete the Laravel cached files, which are stored in storage/framework/views
Check that your server is pointing to the right project folder
If point 1 or 2 works, you can automate the process by running the CLI commands after every push;
points 1 and 2 are both achievable via CLI commands (see the sketch below).
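As a sketch of what those CLI commands could look like on the AWS server after each deployment (run from the Laravel project root; exact steps depend on your pipeline and Angular CLI version):
php artisan view:clear      # removes the compiled Blade templates from storage/framework/views
php artisan cache:clear     # flushes the application cache
php artisan config:clear    # drops the cached configuration
ng build --prod             # rebuild Angular so the dist folder gets freshly hashed assets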

Laravel on kubernetes - slow composer vendor autoload (production)

We have set up a kubernetes cluster for our laravel application on google cloud platform.
Containers:
application code + php-fpm
apache2
others not related to the issue
(We run under nginx-ingress-controller but this seems unrelated to the issue)
We ran a JMeter stress test on a simple Laravel route that returns "ok" and we noticed terrible response times.
Afterwards we ran the same test on an index2.php (placed inside the public dir to bypass the framework) which also just returns 'ok'.
And we got this result(!): the plain PHP file was dramatically faster.
After digging we found out that Composer's autoloading was causing this slowness.
Any advice on how this could be resolved will be highly appreciated.
Thanks
OK, we found out that we had no OPcache enabled.
As documented for Composer's optimize-autoloader option:
On PHP 5.6+, the class map is also cached in opcache which improves the initialization time greatly. If you make sure opcache is enabled, then the class map should load almost instantly and then class loading is fast.
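As a sketch of the fix that follows from this (directive values are illustrative and should be tuned for your workload), enabling OPcache in the php-fpm container and building an optimized autoloader looks roughly like:
; opcache settings in the php-fpm container's php.ini / opcache.ini
opcache.enable=1
opcache.validate_timestamps=0   ; safe when the production image is immutable
opcache.memory_consumption=192
# and during the image build or deployment:
composer install --no-dev --optimize-autoloader --classmap-authoritative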

I am unable to deploy phoenix app to heroku because of failed dependency (called coherence) compilation, how to make it work?

So to start I made an Elixir application using Phoenix framework.
This application uses the coherence dependency for authentication to the website. The dependency was installed as advised in its git repo, with the -full argument to install all the options coherence has.
Then I just changed a couple of lines in my project's config.exs file to use the Mailgun service for mailing and put the credentials there.
Next, I installed and configured my other deps (they have nothing to do with coherence).
Locally, my application could compile and run without problems.
Then, I wanted to deploy it to Heroku using Phoenix guidelines.
When I completed all the steps, I got an error when trying to push the application to Heroku.
I then checked the file lib/mix/tasks/coherence.clean.ex at line 162, where I found a comment saying there is an error with updating a config file, but I couldn't figure out what that means or how to solve it.
I tried making a fresh Phoenix application, installing coherence with the same or different options, and then deploying it following the Phoenix guidelines. Every time I got the same error.
I also want to note that I tried creating an elixir_buildpack.config file and putting always_rebuild=true in it, with no success (this is a solution mentioned in the troubleshooting section of the deploying-to-Heroku guide).
So, my question is: what do I need to change in my config.exs file (or elsewhere) to make at least a fresh application with coherence installed compile and work on Heroku?
useful links:
coherence dep github link
Thanks a ton guys.
The Heroku Buildpack for Elixir currently defaults to Elixir 1.2.6, while the code that throws that error uses the else clause of with, a feature that was added in Elixir 1.3.0. So you need to set the Elixir version to 1.3.0 or later by adding the following to elixir_buildpack.config:
elixir_version=1.3.2
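For reference, combined with the always_rebuild option mentioned in the question, the resulting elixir_buildpack.config can be as minimal as this sketch (add other buildpack options as needed):
# elixir_buildpack.config
elixir_version=1.3.2
always_rebuild=true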
