Composer error before deploying on Bitbucket Pipelines - Laravel

We have a CI/CD process, with a Dockerfile inside, for deploying to Laravel Vapor environments via Bitbucket Pipelines. It consists of 4 basic steps:
Install
Build
Test
Deploy
The interesting point is that the first 3 steps pass without any problems.
We can run the same command from the 4th step in our local environment and deploy to any environment without any problems.
But when we try to deploy via Bitbucket Pipelines (which was still working 10 days ago but is broken now), we fail with an error message of
In ClassLoader.php line 571:
include(/opt/atlassian/pipelines/agent/build/.vapor/build/app/vendor/composer/../composer/composer/src/Composer/Console/GithubActionError.php): Failed to open stream: No such file or directory
on the composer install command.
Our current pipeline configuration:
image: lorisleiva/laravel-docker:8.0

definitions:
  steps:
    - step: &Install
        name: Install
        script:
          - npm ci
          - composer install
          - composer dump-autoload
        artifacts:
          - node_modules/**
          - vendor/**
    - step: &Build
        name: Build
        script:
          - npm run prod
        artifacts:
          - public/**
          - vendor/**
    - step: &Test
        name: Test & Lint
        script:
          - php -d memory_limit=4G vendor/bin/phpstan
          - vendor/bin/phplint ./ --exclude=vendor
          - vendor/bin/phpunit --coverage-text --colors=never
  caches:
    node: node_modules
    composer: vendor
    public: public

pipelines:
  branches:
    release/test:
      - step: *Install
      - step: *Build
      - step: *Test
      - step:
          name: Release to Vapor [test]
          services:
            - docker
          script:
            - COMMIT_MESSAGE=`git log --format=%B -n 1 $BITBUCKET_COMMIT`
            - vendor/bin/vapor deploy test --commit="$BITBUCKET_COMMIT" --message="$COMMIT_MESSAGE"
Our test Dockerfile for Vapor:
FROM laravelphp/vapor:php80
COPY . /var/task
And our Vapor configuration:
build:
  - "COMPOSER_MIRROR_PATH_REPOS=1 composer install --no-dev"
  - "php artisan event:cache"
  - "npm ci && npm run prod && rm -rf node_modules"
deploy:
  - "php artisan migrate"
  - "php artisan lighthouse:clear-cache"
We tried removing the Composer cache in the Bitbucket Pipelines config.
We read "composer cache not working on bitbucket pipeline build" and https://github.com/lorisleiva/laravel-docker/issues/67, but still have no idea why this is happening, so any help or suggestions are more than welcome.

TL;DR: Run rm -rf ./vendor before the composer install you run for deployment.
Now to our analysis 👇🏼
We run all our tests and deploys in GitLab CI (thanks to #lorisleiva 🤗). We have 3 jobs in 3 stages:
preparation stage runs the "composer" job, which runs composer install --prefer-dist --no-ansi --no-interaction --no-progress --no-scripts
testing stage runs the "phpunit" job
deploy stage runs our Vapor deploy script, which runs COMPOSER_MIRROR_PATH_REPOS=1 composer install --prefer-dist --no-ansi --no-interaction --no-progress --optimize-autoloader --no-dev
So, in the composer job we install our dev dependencies because we need them to test the app. The resulting ./vendor directory gets cached by GitLab's CI system and it's automatically made available for the subsequent stages.
That's great, because it means we install dependencies once and can then reuse them in the testing and deploying stages. So we "deploy the same thing that we tested"...
But that's actually false, because for production we don't want to install the development dependencies. That's why we use the --no-dev flag in composer in the deploy stage. Keep in mind, we do need those development dependencies to run phpunit.
And when we run it we see a message like:
Installing dependencies from lock file
Verifying lock file contents can be installed on current platform.
Package operations: 0 installs, 0 updates, 73 removals
That makes sense, we already have access to the cached ./vendor directory from the composer job and now we only need to remove the development dependencies.
That's when things fall apart. I've no idea if this is a bug in composer itself, in a dependency, in our codebase, etc... but composer errors out with the ...GithubActionError.php error when trying to remove the development dependencies. If we remove the --no-dev it works perfectly, but That's A NoNo.
Long story short, our solution is to embrace the fact that composer.lock exists and that this job runs in CI (where the download speed is insanely fast). So we nuke the ./vendor directory by running rm -rf ./vendor right before the deployable composer install ... --no-dev.
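For illustration, a minimal sketch of what that deploy job could look like in .gitlab-ci.yml (the job layout and the final vapor command are assumptions, not our exact config):

deploy:
  stage: deploy
  script:
    # Nuke the dev vendors restored from the cache so --no-dev installs cleanly
    - rm -rf ./vendor
    - COMPOSER_MIRROR_PATH_REPOS=1 composer install --prefer-dist --no-ansi --no-interaction --no-progress --optimize-autoloader --no-dev
    # Hypothetical deploy command; substitute your own Vapor invocation
    - vendor/bin/vapor deploy production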
In the end, I think this is perfectly acceptable.
I'm sure there's a way to tell GitLab to avoid downloading the cached ./vendor directory, or an overall better way to do this. But we've spent way too much time today trying to understand and fix this... so, it's going to stay like this. And, no, it doesn't seem to be related to the lorisleiva/laravel-docker:x.x Docker images.
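For what it's worth, GitLab CI does let a single job opt out of the shared cache; a minimal sketch, assuming a reasonably recent GitLab version:

deploy:
  stage: deploy
  cache: []   # don't pull the cached ./vendor into this job
  script:
    - COMPOSER_MIRROR_PATH_REPOS=1 composer install --prefer-dist --no-ansi --no-interaction --no-progress --optimize-autoloader --no-dev

That would make the rm -rf ./vendor workaround unnecessary, at the cost of the job never seeing the cache at all.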
I hope this is helpful or at least interesting :)
Please do let me know if anyone finds a better approach.
Source: https://github.com/lorisleiva/laravel-docker/issues/67#issuecomment-1009419913

I'm having the same issue.
It works fine if I remove --no-dev from this line in the vapor.yaml file:
- 'COMPOSER_MIRROR_PATH_REPOS=1 composer install'
Of course that's not a solution, but maybe it helps to identify the issue.

I was having the same issue but finally fixed it.
include(/builds/myapp/myapp-api/vendor/composer/../composer/composer/src/Composer/Console/GithubActionError.php): Failed to open stream: No such file or directory
I am using a GitLab pipeline with the same lorisleiva/laravel-docker:8.0 image. On further investigation I found that composer self-update gives Command "self-update" is not defined., so I thought it was about Composer.
So I changed the .gitlab-ci.yml file like this:
- curl -sS https://getcomposer.org/installer | php
- ./composer.phar -V
- ./composer.phar install --prefer-dist --no-ansi --no-interaction --no-progress --no-scripts --ignore-platform-reqs
So I downloaded a new composer.phar file and used it instead of the default composer command, and it worked.

Related

Cypress dependency issues in Docker container

I am trying to integrate cypress to the bitbucket pipeline. And I am following the official documentation:
- step:
    script:
      # install dependencies
      - npm ci
      # run Cypress tests
      - npm run e2e (env variables here)
I launch the container locally as follows:
docker run -v `pwd`:/mycode -it imagename /bin/sh
cd /mycode
and I run the steps in the script:
/mycode# npm ci; npm run e2e (env variables here)
But I get the following error:
/root/.cache/Cypress/8.2.0/Cypress/Cypress: error while loading shared libraries: libgbm.so.1: cannot open shared object file: No such file or directory
I ran apt-get install xvfb libgtk2.0-0 libnotify-dev libgconf-2-4 libnss3 libxss1 libasound2, as per the documentation, when I hit the missing libgtk2.0-0 dependency, and it then threw the next one.
I have also added nvm install "lts/*" --reinstall-packages-from="$(nvm current)" as a step to update Node to the latest version and match Cypress requirements,
but is there any general practice for integrating Cypress into an existing project's pipeline and working around these library issues?
Is the fix just to install the library or is there a better integration practice and something I'm missing?
You can use an official Cypress image just for the step where you want to run the tests. You can choose the version that suits you best.
- step:
    name: run tests
    image: cypress/browsers:node12.18.3-chrome87-ff82
    script:
      # install dependencies
      - npm ci
      # run Cypress tests
      - npm run e2e (env variables here)
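If you would rather keep your existing image, note that libgbm.so.1 is provided by the libgbm packages on Debian/Ubuntu. A sketch of the fuller dependency install from the Cypress docs (package names assume a Debian-based image):

apt-get update && apt-get install -y \
  libgtk2.0-0 libgtk-3-0 libgbm-dev libnotify-dev libgconf-2-4 \
  libnss3 libxss1 libasound2 libxtst6 xauth xvfb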

GitLab CI error: bash: composer: command not found

I am trying to use a pipeline with GitLab CI.
This is my .gitlab-ci.yml file:
image: docker-laravel-envoy
services:
  - mysql:5.7
stages:
  - test
unit_test:
  stage: test
  script:
    - composer install --prefer-dist --no-ansi --no-interaction --no-progress --no-scripts
    - php artisan key:generate
    - php artisan migrate
    - vendor/bin/phpunit
And this is the result of the pipeline:
How can I choose the right image, one that comes with Composer installed?
I tried several Docker images but am still getting the same result (is there a list of Docker images from which I can choose the one that corresponds to my project?). I tried:
lorisleiva/laravel-docker:latest
lorisleiva/laravel-docker:7.4
Does someone know where the problem is?
GitLab runners: I have a shell GitLab runner; when I want to use a Docker GitLab runner I get this issue.
I'm using a custom installation of GitLab on my own server.
Should I install Docker on this server to resolve this issue?
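For reference, when the chosen image lacks Composer you can fetch it inside the job itself, similar to the composer.phar approach in the earlier answer; a minimal sketch using the official installer (the install path is an assumption):

before_script:
  # download Composer and put it on the PATH so `composer` works in later script lines
  - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
  - composer --version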

Travis CI: Build failing because of "uncommitted changes"

We've been using Travis CI for weeks on this project without issue; now suddenly our builds are failing because of "uncommitted changes". I've no idea why.
- Upgrading ramsey/collection (1.1.1 => 1.1.3): Checking out 28a5c4ab2f from cache
- Upgrading brick/math (0.9.1 => 0.9.2): Checking out dff976c2f3 from cache
- Upgrading symfony/translation (v5.2.1 => v5.2.5): Checking out 0947ab1e3a from cache
[RuntimeException]
Source directory /home/travis/build/vendor/nesbot/carbon has uncommitted changes.
What's strange is that our .travis.yml file hasn't changed at all.
language: php
php:
  - 7.3
services:
  - mysql
before_install:
  - mysql -e 'CREATE DATABASE travis_test;'
cache:
  directories:
    - node_modules
    - vendor
before_script:
  - cp .env.travis .env
  - sudo mysql -e 'create database homestead;'
  - composer self-update
  - composer install --prefer-source --no-interaction --dev
  - php artisan key:generate
  - php artisan migrate --no-interaction -vvv
  - php artisan import:required-data
script:
  - php artisan test
notifications:
  email: false
What is causing this "uncommitted changes" error? How can I fix it?
Travis CI stores a cache of your vendor folder. Sometimes if you've performed a composer update locally it can cause some issues with that cached folder. The solution is to clear your caches and force Travis CI to build everything from scratch again.
To do this, go to your project in Travis CI and click More options > Caches. On that page you can clear all your caches, then ask Travis CI to rebuild.
It will take a lot longer the first time, but it should clear this error message.
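If you prefer the command line, the travis CLI gem can clear caches too; a sketch, assuming the CLI is installed and you are logged in:

# delete the caches for the current repository
travis cache --delete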

How to cache package manager downloads for docker builds?

If I run composer install from my host, I hit my local composer cache:
- Installing deft/iso3166-utility (1.0.0)
Loading from cache
Yet when building a container with this in its Dockerfile:
RUN composer install -n -o --no-dev
I download all the things, e.g.:
- Installing deft/iso3166-utility (1.0.0)
Downloading: 100%
It's expected, yet I'd like to avoid it, as even on a rebuild it would download everything again.
I would like to have a universal cache for Composer that I could also reuse for other Docker projects.
I looked into this and found the approach to define a volume in the Dockerfile:
ENV COMPOSER_HOME=/var/composer
VOLUME /var/composer
I added that to my Dockerfile and expected to download the files only once and hit the cache afterwards.
Yet when I modify my composer command, e.g. remove the -o flag, and rerun docker build ., I expect to hit the cache on build, yet it still downloads the vendors again.
How are volumes supposed to work to have a data cache inside a Docker container?
Use the experimental feature: Docker BuildKit (supported since Docker 18.09, docker-compose 1.25.4).
In your Dockerfile:
# syntax=docker/dockerfile:experimental
FROM ....
# ......
RUN --mount=type=cache,target=/var/composer composer install -n -o --no-dev
Now before building, make sure the env var is exported:
export DOCKER_BUILDKIT=1
docker build ....
If you are using docker-compose, make sure to also export COMPOSE_DOCKER_CLI_BUILD:
export COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1
docker-compose build ...
If it does not work with docker-compose, make sure your docker-compose version is above 1.25.4:
docker-compose version
I found two ways of dealing with this problem, though neither deals with Composer volumes anymore.
Speed up the Composer download process: use hirak/prestissimo
composer global require "hirak/prestissimo:^0.3"
💡 With Composer 2.0, the above step is no longer required for faster downloads. In fact, it won't install on Composer 2.0 environments.
Force Docker to use a cached composer install. Docker reuses the cache for a RUN instruction if the files added before it didn't change. If you only do COPY . /your-php-app, docker build will invalidate all the caches and re-run composer install even if only one unrelated file in the source tree changed. In order to make docker build run composer install only on package changes, you have to add the composer.json and composer.lock files before adding the source files. Since you also need the source files anyway, you have to run composer install in a different folder and rsync the content back into the folder added afterwards; furthermore, you then have to run the post-install scripts manually. It should look something like this (untested):
# Install dependencies first, in a separate directory, so this layer stays
# cached until composer.json or composer.lock changes
WORKDIR /tmp/
COPY composer.json composer.lock ./
RUN composer install -n -o --no-dev --no-scripts
# Now add the application source and merge the vendors back in
WORKDIR /your-php-app/
COPY . /your-php-app/
RUN rsync -ah /tmp/* /your-php-app/
# Scripts were skipped above (--no-scripts), so run them manually
RUN composer run-script post-install-cmd
or combine the two =)
I would like to have a universal cache for Composer that I could also reuse for other Docker projects.
Using a shared volume for the Composer cache works great when working with containers. If you want to go broader than just containers, and use a shared cache for e.g. local development as well, I've developed a solution for that called Velocita - how it works.
Basically, you use one global Composer plugin for local projects and inside your build containers. This not only speeds up downloads tremendously, it also helps with third-party outages, for example.
I would consider utilizing the $HOME/.composer/cache/files directory. This is where Composer reads from and writes to when running composer install.
If you are able to mount it from your host into your container, that would work. Alternatively, you could tar it up after each composer install run and restore it before the next one.
This is loosely how Travis CI recommends doing this.
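A rough sketch of that tar approach in a CI script (the archive name and paths are assumptions):

# restore the Composer cache saved by a previous build, if any
[ -f composer-cache.tar ] && mkdir -p ~/.composer && tar -xf composer-cache.tar -C ~/.composer
composer install --prefer-dist
# save the cache for the next build
tar -cf composer-cache.tar -C ~/.composer cache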
Also, consider using the --prefer-dist flag with your composer install command.
Info on that can be found here: https://getcomposer.org/doc/03-cli.md#install
--prefer-dist: Reverse of --prefer-source, composer will install from dist if possible. This can speed up installs substantially on build servers and other use cases where you typically do not run updates of the vendors. It is also a way to circumvent problems with git if you do not have a proper setup.
Some references on utilizing the composer cache for you:
https://blog.wyrihaximus.net/2015/07/composer-cache-on-travis/
https://github.com/travis-ci/travis-ci/issues/4579

Travis isn't running post deploy commands on Heroku

I'm trying to move the post-deploy commands out of my package.json's postinstall line and instead utilise the run object in my .travis.yml, but it doesn't seem to invoke the commands.
deploy:
  provider: heroku
  run:
    - "npm install"
    - "node_modules/bower/bin/bower install"
    - "node_modules/grunt-cli/bin/grunt build:production"
  api_key:
    secure: ...
  app: node-snapshot
  on:
    repo: Wildhoney/Snapshot.js
Even when I remove the directory and just invoke bower install, it still does nothing. I've also done heroku run bash -a node-snapshot, and ls there shows a strange vendor directory in the root with node inside it, but the Bower dependencies and the result of grunt build:production are nowhere to be found.
Ideas?
