I'm trying to use a pipeline with GitLab CI.
This is my .gitlab-ci.yml file:
image: docker-laravel-envoy
services:
  - mysql:5.7
stages:
  - test
unit_test:
  stage: test
  script:
    - composer install --prefer-dist --no-ansi --no-interaction --no-progress --no-scripts
    - php artisan key:generate
    - php artisan migrate
    - vendor/bin/phpunit
And this is the result of the pipeline:
How can I choose the right image that will install Composer?
I tried several Docker images but still get the same result.
I tried the following (is there a list of Docker images where I can choose the one that corresponds to my project?):
lorisleiva/laravel-docker:latest
lorisleiva/laravel-docker:7.4
Does anyone know where the problem is?
GitLab runners
Shell GitLab runner
When I use the Docker GitLab runner
When I want to use a Docker runner I get this issue:
I'm using a custom installation of GitLab on my own server.
Should I install Docker on this server to resolve this issue?
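For reference, a minimal sketch of a configuration that should have Composer available, assuming one of the lorisleiva/laravel-docker images listed above (they bundle PHP, Composer, and npm) and a runner with the Docker executor; the database variables are illustrative assumptions and must match what your tests expect:

image: lorisleiva/laravel-docker:7.4

services:
  - mysql:5.7

variables:
  MYSQL_DATABASE: laravel       # assumed database name
  MYSQL_ROOT_PASSWORD: secret   # assumed password
  DB_HOST: mysql                # the service is reachable under its image name
  DB_USERNAME: root
  DB_PASSWORD: secret

stages:
  - test

unit_test:
  stage: test
  script:
    - composer install --prefer-dist --no-ansi --no-interaction --no-progress --no-scripts
    - php artisan key:generate
    - php artisan migrate
    - vendor/bin/phpunit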
I want to start my Docker container with a docker-compose command.
The underlying Docker image's CMD should just be executed as usual, and I want to append my script at some point.
When reading about executing shell commands in Docker, entrypoint comes up.
But according to the documentation, starting the image normally and appending a script without overriding entrypoint or cmd is not possible through entrypoint (https://docs.docker.com/compose/compose-file/#entrypoint):
Compose implementations MUST clear out any default command on the Docker image - both ENTRYPOINT and CMD instruction in the Dockerfile - when entrypoint is configured by a Compose file.
A similar question was asked here, but the answer did not address the issue:
docker-compose, how to run bash commands after container has started, without overriding the CMD or ENTRYPOINT in the image docker is pulling in?
Another option would be to copy and edit the Dockerfile of the pulled image, but that would not be great for future image updates:
docker-compose: run a command without overriding anything
What I actually want to do is couple the installation of PHP & Composer to the docker-compose up process.
Here is my docker-compose file:
version: '3.8'
services:
  jenkins:
    image: jenkins/jenkins:2.361.1
    restart: always
    privileged: true
    user: root
    ports:
      - "8080:8080"
      - "50000:50000"
    container_name: "aaa-jenkins"
    volumes:
      - "./jenkins_configuration:/var/jenkins_home"
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "/usr/bin/docker:/usr/bin/docker"
My script looks something like this:
#!/bin/bash
# install PHP and the extensions the project needs
apt update
apt -y install php
apt -y install php-xml php-zip php-curl php-mbstring php-xdebug
# download the Composer installer and install it globally
php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
php composer-setup.php --install-dir=/usr/local/bin --filename=composer
composer
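One possible approach (a sketch of my own, not a confirmed solution): override entrypoint once in the Compose file, run the setup script, and then hand off to the image's regular startup with exec. The jenkins/jenkins image normally starts via /usr/bin/tini -- /usr/local/bin/jenkins.sh, so chaining that explicitly keeps the stock behavior; the script name install-php-composer.sh is hypothetical, and the remaining keys stay as in the original file:

services:
  jenkins:
    image: jenkins/jenkins:2.361.1
    user: root
    volumes:
      - "./install-php-composer.sh:/usr/local/bin/install-php-composer.sh:ro"
    # run the setup script first, then exec the image's documented startup command
    entrypoint: ["/bin/bash", "-c", "/usr/local/bin/install-php-composer.sh && exec /usr/bin/tini -- /usr/local/bin/jenkins.sh"]

The trade-off is that the startup command is now pinned in the Compose file and has to be kept in sync if the image ever changes its entrypoint.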
We have a CI/CD process with a Dockerfile in it for deploying to Laravel Vapor environments via a Bitbucket pipeline, which consists of 4 basic steps:
Install
Build
Test
Deploy
The interesting point is that we pass the first 3 steps without any problems.
We can run the same command as in the 4th step in our local environment and deploy to any environment without any problems.
But when we try to deploy via the Bitbucket pipeline (which was still working 10 days ago but is broken now), it fails with the following error message
In ClassLoader.php line 571:
include(/opt/atlassian/pipelines/agent/build/.vapor/build/app/vendor/composer/../composer/composer/src/Composer/Console/GithubActionError.php): Failed to open stream: No such file or directory
on the composer install command.
Our current pipeline configuration:
image: lorisleiva/laravel-docker:8.0

definitions:
  steps:
    - step: &Install
        name: Install
        script:
          - npm ci
          - composer install
          - composer dump-autoload
        artifacts:
          - node_modules/**
          - vendor/**
    - step: &Build
        name: Build
        script:
          - npm run prod
        artifacts:
          - public/**
          - vendor/**
    - step: &Test
        name: Test & Lint
        script:
          - php -d memory_limit=4G vendor/bin/phpstan
          - vendor/bin/phplint ./ --exclude=vendor
          - vendor/bin/phpunit --coverage-text --colors=never
  caches:
    node: node_modules
    composer: vendor
    public: public

pipelines:
  branches:
    release/test:
      - step: *Install
      - step: *Build
      - step: *Test
      - step:
          name: Release to Vapor [test]
          services:
            - docker
          script:
            - COMMIT_MESSAGE=`git log --format=%B -n 1 $BITBUCKET_COMMIT`
            - vendor/bin/vapor deploy test --commit="$BITBUCKET_COMMIT" --message="$COMMIT_MESSAGE"
Our test Dockerfile for Vapor:
FROM laravelphp/vapor:php80
COPY . /var/task
And our Vapor configuration:
build:
  - "COMPOSER_MIRROR_PATH_REPOS=1 composer install --no-dev"
  - "php artisan event:cache"
  - "npm ci && npm run prod && rm -rf node_modules"
deploy:
  - "php artisan migrate"
  - "php artisan lighthouse:clear-cache"
We tried removing the Composer cache in the Bitbucket pipeline config.
We read composer cache not working on bitbucket pipeline build and https://github.com/lorisleiva/laravel-docker/issues/67 but still have no idea why this is happening, so any help or suggestions are more than welcome.
TL;DR: Run rm -rf ./vendor before your composer install when deploying.
Now to our analysis 👇🏼
We run all our tests and deploys in GitLab CI (thanks to @lorisleiva 🤗). And we have 3 jobs in 3 stages:
The preparation stage runs the "composer" job, which runs composer install --prefer-dist --no-ansi --no-interaction --no-progress --no-scripts
The testing stage runs the "phpunit" job
The deploy stage runs our Vapor deploy script, which runs COMPOSER_MIRROR_PATH_REPOS=1 composer install --prefer-dist --no-ansi --no-interaction --no-progress --optimize-autoloader --no-dev
So, in the composer job we install our dev dependencies because we need them to test the app. The resulting ./vendor directory gets cached by GitLab's CI system and it's automatically made available for the subsequent stages.
That's great, cause it means we install dependencies once and then we can reuse that in the testing and deploying stages. So we "deploy the same thing that we tested"...
But that's actually false, cause for production we don't want to install the development dependencies. That's why we use the --no-dev flag in composer in the deploy stage. Keep in mind, we do need those development dependencies to run phpunit.
And when we run it we see a message like:
Installing dependencies from lock file
Verifying lock file contents can be installed on current platform.
Package operations: 0 installs, 0 updates, 73 removals
That makes sense, we already have access to the cached ./vendor directory from the composer job and now we only need to remove the development dependencies.
That's when things fall apart. I've no idea if this is a bug in composer itself, in a dependency, in our codebase, etc... but composer errors out with the ...GithubActionError.php error when trying to remove the development dependencies. If we remove the --no-dev it works perfectly, but That's A NoNo.
Long story short, our solution is to embrace the fact that composer.lock exists and that this job runs in CI (where the download speed is insanely fast). So we nuke the ./vendor directory by running rm -rf ./vendor right before the deployable composer install ... --no-dev.
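Mapped onto the vapor.yml from the question above, that amounts to something like this (a sketch; only the rm -rf line is new):

build:
  # discard the cached dev vendor tree so composer performs a clean --no-dev install
  - "rm -rf ./vendor"
  - "COMPOSER_MIRROR_PATH_REPOS=1 composer install --no-dev"
  - "php artisan event:cache"
  - "npm ci && npm run prod && rm -rf node_modules"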
In the end, I think this is perfectly acceptable.
I'm sure there's a way to tell GitLab to avoid downloading the cached ./vendor directory, or an overall better way to do this. But we've spent way too much time today trying to understand and fix this... so, it's going to stay like this. And, no, it doesn't seem to be related to the lorisleiva/laravel-docker:x.x Docker images.
I hope this is helpful or at least interesting :)
Please do let me know if anyone finds a better approach.
Source here https://github.com/lorisleiva/laravel-docker/issues/67#issuecomment-1009419913
I'm having the same issue.
It works fine if I remove "--no-dev" from this line in the vapor.yaml file:
- 'COMPOSER_MIRROR_PATH_REPOS=1 composer install'
Of course this is not a solution, but maybe it helps to identify the issue.
I was having the same issue but finally fixed it.
include(/builds/myapp/myapp-api/vendor/composer/../composer/composer/src/Composer/Console/GithubActionError.php): Failed to open stream: No such file or directory
I am using a GitLab pipeline with the same lorisleiva/laravel-docker:8.0 image. On further investigation I found that composer self-update gives Command "self-update" is not defined., so I thought it was a problem with Composer itself.
So I changed the .gitlab-ci.yml file like this:
- curl -sS https://getcomposer.org/installer | php
- ./composer.phar -V
- ./composer.phar install --prefer-dist --no-ansi --no-interaction --no-progress --no-scripts --ignore-platform-reqs
So I downloaded a fresh composer.phar and used it instead of the default composer command, and that worked.
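In context, the job script ends up looking roughly like this (a sketch; the job and stage names are assumptions, the commands are the ones above):

unit_test:
  stage: test
  script:
    # fetch a fresh composer.phar instead of relying on the image's bundled composer
    - curl -sS https://getcomposer.org/installer | php
    - ./composer.phar -V
    - ./composer.phar install --prefer-dist --no-ansi --no-interaction --no-progress --no-scripts --ignore-platform-reqs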
I am trying to integrate Cypress into the Bitbucket pipeline, and I am following the official documentation:
- step:
    script:
      # install dependencies
      - npm ci
      # run Cypress tests
      - npm run e2e (env variables here)
I launch the container locally as follows:
docker run -v `pwd`:/mycode -it imagename /bin/sh
cd /mycode
and I run the steps in the script:
/mycode# npm ci; npm run e2e (env variables here)
But I get the following error:
/root/.cache/Cypress/8.2.0/Cypress/Cypress: error while loading shared libraries: libgbm.so.1: cannot open shared object file: No such file or directory
I ran apt-get install xvfb libgtk2.0-0 libnotify-dev libgconf-2-4 libnss3 libxss1 libasound2, as per the documentation, when I hit the missing libgtk2.0-0 dependency, and it then threw the next missing library.
I also added nvm install "lts/*" --reinstall-packages-from="$(nvm current)" as a step to update Node to the latest version and match Cypress's requirements,
but is there any general practice for integrating Cypress into an existing project's pipeline and working around these library issues?
Is the fix just to install each missing library, or is there a better integration practice I'm missing?
You can use an official Cypress image for just the step where you want to run the tests, and you can choose the version that suits you best.
- step:
    name: run tests
    image: cypress/browsers:node12.18.3-chrome87-ff82
    script:
      # install dependencies
      - npm ci
      # run Cypress tests
      - npm run e2e (env variables here)
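If you would rather keep your own base image, the libgbm.so.1 error above usually just means that library is missing; on Debian-based images it is provided by the libgbm1 package, so a sketch of the alternative is to extend the documented dependency list:

- step:
    script:
      # Cypress runtime dependencies, plus libgbm1 for the libgbm.so.1 error
      - apt-get update && apt-get install -y libgbm1 xvfb libgtk2.0-0 libnotify-dev libgconf-2-4 libnss3 libxss1 libasound2
      - npm ci
      - npm run e2e (env variables here)

The prebuilt cypress/browsers image mainly saves you from maintaining that list as Cypress's requirements change.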
I have a Jenkins pipeline, just for learning purposes, which should build a Laravel app via docker-compose. The "docker-compose up -d --build" step works fine, but the next step runs "docker-compose run --rm composer update", which then stops with no error or output.
When I run the command manually after accessing the server via SSH, the command runs with no issues.
The composer service in the docker-compose file:
composer:
  build:
    context: .
    dockerfile: composer.dockerfile
  container_name: composer
  volumes:
    - ./src:/var/www/html
  working_dir: /var/www/html
  depends_on:
    - php
  user: laravel
  entrypoint: ['composer', '--ignore-platform-reqs']
  networks:
    - laravel
Build step in the Jenkinsfile:
stage('Build') {
    steps {
        echo 'Building..'
        sh 'chmod +x scripts/jenkins-build.sh'
        sh './scripts/jenkins-build.sh'
    }
}
Command in shell script:
echo "Building docker app"
sudo docker-compose up -d --build site # works fine
sudo chown jenkins -R ./
echo "Running composer"
sudo docker-compose run --rm composer update # hangs in jenkins but works in cmd?
View in Jenkins:
The same command working on the same server, via the command line:
I know there are some bad practices in here, but this is just for learning purposes. The Jenkins server is running Ubuntu 20.04 on an AWS EC2 instance.
In the end I resorted to installing Composer directly into my PHP Docker image. Therefore, instead of running the composer service, I now use docker exec php composer update.
From what I can see, any services that were used via docker-compose run did not work in the Jenkins pipeline. In my case, these were all services that only run while performing some action (like composer update), so maybe that is why Jenkins did not like them.
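A sketch of what baking Composer into the PHP image can look like, assuming a stock php base image (the tag is an assumption; copying the binary from the official composer image is a common idiom):

FROM php:8.1-fpm
# copy the Composer binary from the official composer image
COPY --from=composer:2 /usr/bin/composer /usr/bin/composer

As an aside (not from this thread): docker-compose run allocates a pseudo-TTY by default and a Jenkins job has none, so docker-compose run -T composer update may also be worth trying before restructuring the services.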
I have started to learn Docker, and I am a complete beginner. What I am doing now is trying to deploy a Docker image of a Laravel application to Heroku. I have created a Laravel project with only one page, a welcome page showing a message; that's it. I am just trying to test Docker. I created a Docker image for my Laravel project and successfully ran it on my laptop, as follows.
I created a Dockerfile in the project root folder with the following content.
FROM php:7
RUN apt-get update -y && apt-get install -y openssl zip unzip git
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN docker-php-ext-install pdo mbstring
WORKDIR /app
COPY . /app
RUN composer install
CMD php artisan serve --host=0.0.0.0 --port=8181
EXPOSE 8181
Then I built the image like this:
docker build -t waiyanhein/laravel c:/xampp/htdocs/docker_laravel
Then I ran it locally with the following command:
docker run -p 8181:8181 waiyanhein/laravel
Everything was working. Then I tried to deploy the image to Heroku, following this link: https://devcenter.heroku.com/articles/container-registry-and-runtime. As in the link, I logged into Heroku:
heroku container:login
Login succeeded. Then I created the app by running this command:
heroku create dockerlaravelwai
The command was successful, and this is the result.
Then I pushed it, as the next step described in the link, by running the following command:
heroku container:push web
When I ran the above command, I got the following error:
» Error: Missing required flag:
» -a, --app APP app to run command against
» See more help with --help
What went wrong? And how can I easily deploy the Laravel Docker image to Heroku?
It's asking you to specify the app name:
heroku container:push web --app dockerlaravelwai
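Per the Heroku article linked in the question, the push is then followed by a release to actually deploy the container (the app name is the one created above):

heroku container:push web --app dockerlaravelwai
heroku container:release web --app dockerlaravelwai

One more caveat worth checking: Heroku's container runtime ignores EXPOSE and injects the listening port through the PORT environment variable, so the Dockerfile's CMD typically needs --port=$PORT instead of the hard-coded 8181.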