I am trying to integrate Cypress into a Bitbucket pipeline, following the official documentation:
- step:
    script:
      # install dependencies
      - npm ci
      # run Cypress tests
      - npm run e2e (env variables here)
I launch the container locally as follows:
docker run -v `pwd`:/mycode -it imagename /bin/sh
cd /mycode
and I run the steps in the script:
/mycode# npm ci; npm run e2e (env variables here)
But I get the following error:
/root/.cache/Cypress/8.2.0/Cypress/Cypress: error while loading shared libraries: libgbm.so.1: cannot open shared object file: No such file or directory
When libgtk2.0-0 came up as a missing dependency, I ran apt-get install xvfb libgtk2.0-0 libnotify-dev libgconf-2-4 libnss3 libxss1 libasound2, as per the documentation, and then it threw the next missing library (libgbm.so.1 above).
I have also added nvm install "lts/*" --reinstall-packages-from="$(nvm current)" as a step to update Node to the latest version and match Cypress's requirements.
But is there a general practice for integrating Cypress into an existing project's pipeline and working around these library issues?
Is the fix just to install each missing library, or is there a better integration practice I'm missing?
You can use an official Cypress image just for the step that runs the tests, and choose whichever version suits you best.
- step:
    name: run tests
    image: cypress/browsers:node12.18.3-chrome87-ff82
    script:
      # install dependencies
      - npm ci
      # run Cypress tests
      - npm run e2e (env variables here)
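Putting it together, a minimal bitbucket-pipelines.yml could look like the sketch below (the default image, the cache choice, and the e2e script name are assumptions; adapt them to your project):

    image: node:14  # default image for any other steps; an assumption

    pipelines:
      default:
        - step:
            name: run tests
            image: cypress/browsers:node12.18.3-chrome87-ff82
            caches:
              - node  # Bitbucket's predefined node_modules cache
            script:
              - npm ci
              - npm run e2e

This way only the test step pays the cost of the heavier Cypress image, and you avoid installing xvfb, libgbm, and friends by hand.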
Related
We have a CI/CD process (with a Dockerfile inside) for deploying to Laravel Vapor environments via a Bitbucket pipeline, which consists of 4 basic steps:
Install
Build
Test
Deploy
The interesting point is that the first 3 steps pass without any problems.
We can run the same command from the 4th step in our local environment and deploy to any environment without any problems.
But when we try to deploy via the Bitbucket pipeline (which was still working 10 days ago but is broken now), it fails with the following error
In ClassLoader.php line 571:

include(/opt/atlassian/pipelines/agent/build/.vapor/build/app/vendor/composer/../composer/composer/src/Composer/Console/GithubActionError.php): Failed to open stream: No such file or directory

on the composer install command.
Our current pipeline configuration:
image: lorisleiva/laravel-docker:8.0
definitions:
  steps:
    - step: &Install
        name: Install
        script:
          - npm ci
          - composer install
          - composer dump-autoload
        artifacts:
          - node_modules/**
          - vendor/**
    - step: &Build
        name: Build
        script:
          - npm run prod
        artifacts:
          - public/**
          - vendor/**
    - step: &Test
        name: Test & Lint
        script:
          - php -d memory_limit=4G vendor/bin/phpstan
          - vendor/bin/phplint ./ --exclude=vendor
          - vendor/bin/phpunit --coverage-text --colors=never
  caches:
    node: node_modules
    composer: vendor
    public: public

pipelines:
  branches:
    release/test:
      - step: *Install
      - step: *Build
      - step: *Test
      - step:
          name: Release to Vapor [test]
          services:
            - docker
          script:
            - COMMIT_MESSAGE=`git log --format=%B -n 1 $BITBUCKET_COMMIT`
            - vendor/bin/vapor deploy test --commit="$BITBUCKET_COMMIT" --message="$COMMIT_MESSAGE"
Our test Dockerfile for Vapor:
FROM laravelphp/vapor:php80
COPY . /var/task
and our vapor configuration:
build:
  - "COMPOSER_MIRROR_PATH_REPOS=1 composer install --no-dev"
  - "php artisan event:cache"
  - "npm ci && npm run prod && rm -rf node_modules"

deploy:
  - "php artisan migrate"
  - "php artisan lighthouse:clear-cache"
We tried removing the composer cache in the Bitbucket pipeline config. We also read "composer cache not working on bitbucket pipeline build" and https://github.com/lorisleiva/laravel-docker/issues/67, but we still have no idea why this is happening, so any help or suggestions are more than welcome.
TL;DR: Run rm -rf ./vendor before the composer install you run for deploying.
Now to our analysis 👇🏼
We run all our tests and deploys in GitLab CI (thanks to #lorisleiva 🤗 ). And we have 3 jobs in 3 stages:
preparation stage runs the "composer" job, which runs composer install --prefer-dist --no-ansi --no-interaction --no-progress --no-scripts
testing stage runs the "phpunit" job
deploy stage runs our Vapor deploy script, which runs COMPOSER_MIRROR_PATH_REPOS=1 composer install --prefer-dist --no-ansi --no-interaction --no-progress --optimize-autoloader --no-dev
So, in the composer job we install our dev dependencies because we need them to test the app. The resulting ./vendor directory gets cached by GitLab's CI system and it's automatically made available for the subsequent stages.
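For reference, the relevant caching bit of a .gitlab-ci.yml looks roughly like this (a sketch, not our exact file; the cache key is an assumption):

    cache:
      key: "$CI_COMMIT_REF_SLUG"
      paths:
        - vendor/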
That's great, because it means we install dependencies once and then reuse them in the testing and deploying stages. So we "deploy the same thing that we tested"...
But that's actually false, because for production we don't want to install the development dependencies. That's why we use the --no-dev flag for composer in the deploy stage. Keep in mind, we do need those development dependencies to run phpunit.
And when we run it we see a message like:
Installing dependencies from lock file
Verifying lock file contents can be installed on current platform.
Package operations: 0 installs, 0 updates, 73 removals
That makes sense, we already have access to the cached ./vendor directory from the composer job and now we only need to remove the development dependencies.
That's when things fall apart. I've no idea if this is a bug in composer itself, in a dependency, in our codebase, etc... but composer errors out with the ...GithubActionError.php error when trying to remove the development dependencies. If we remove --no-dev, it works perfectly, but that's a no-no.
Long story short, our solution is to embrace the fact that composer.lock exists and that this job runs in CI (where download speeds are insanely fast). So we nuke the ./vendor directory by running rm -rf ./vendor right before the deployable composer install ... --no-dev.
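Concretely, the deploy job ends up looking something like this sketch (the job/stage names and the final vapor command are assumptions based on the flow described above):

    deploy:
      stage: deploy
      script:
        # nuke the cached dev vendor/ so composer does a clean --no-dev install
        - rm -rf ./vendor
        - COMPOSER_MIRROR_PATH_REPOS=1 composer install --prefer-dist --no-ansi --no-interaction --no-progress --optimize-autoloader --no-dev
        - vendor/bin/vapor deploy production  # hypothetical environment name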
In the end, I think this is perfectly acceptable.
I'm sure there's a way to tell GitLab to avoid downloading the cached ./vendor directory, or an overall better way to do this. But we've spent way too much time today trying to understand and fix this... so it's going to stay like this. And, no, it doesn't seem to be related to the lorisleiva/laravel-docker:x.x Docker images.
I hope this is helpful or at least interesting :)
Please do let me know if anyone finds a better approach.
Source here https://github.com/lorisleiva/laravel-docker/issues/67#issuecomment-1009419913
I'm having the same issue.
It works fine if I remove " --no-dev" from this line in the vapor.yaml file:
- 'COMPOSER_MIRROR_PATH_REPOS=1 composer install'
Of course this is not a solution, but maybe it helps to identify the issue.
I was having the same issue, but I finally fixed it.
include(/builds/myapp/myapp-api/vendor/composer/../composer/composer/src/Composer/Console/GithubActionError.php): Failed to open stream: No such file or directory
I am using a GitLab pipeline with the same lorisleiva/laravel-docker:8.0 image. On further investigation I found that composer self-update gives Command "self-update" is not defined., so I thought it was a composer problem.
So I changed the .gitlab-ci.yml file like this:
- curl -sS https://getcomposer.org/installer | php
- ./composer.phar -V
- ./composer.phar install --prefer-dist --no-ansi --no-interaction --no-progress --no-scripts --ignore-platform-reqs
So I downloaded a fresh composer.phar file, used that instead of the default composer command, and it worked.
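In context, the relevant job in .gitlab-ci.yml would look roughly like this (a sketch; the job and stage names are assumptions, the three script lines are the ones above):

    composer:
      stage: preparation
      script:
        - curl -sS https://getcomposer.org/installer | php
        - ./composer.phar -V
        - ./composer.phar install --prefer-dist --no-ansi --no-interaction --no-progress --no-scripts --ignore-platform-reqs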
I am new to the Protractor platform, and I have tried to install and run Protractor locally on Windows with no luck so far. Can anybody please tell me the exact steps? I am able to install and run it globally.
For you:
Create an empty folder as the project base dir
Open a terminal and cd into the new folder
Execute npm init to generate package.json
Execute npm install -S protractor to install Protractor locally and add it to the project's dependencies (you can check that protractor appears in package.json)
Prepare your test script
Commit the test script together with package.json
For others who want to run your code locally:
Clone your code to a local machine
Open a terminal and cd into the project base dir
Execute npm install to install the dependencies locally
Execute the Protractor CLI to run the tests (a minimal sketch follows below)
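As a minimal sketch (the file names, the directConnect setting, and the spec path are assumptions, not from the original answer), a conf.js for the last step could look like this:

    // conf.js — minimal Protractor configuration (hypothetical example)
    exports.config = {
      directConnect: true,                      // talk to Chrome directly, no Selenium server
      capabilities: { browserName: 'chrome' },
      framework: 'jasmine',
      specs: ['spec.js'],                       // path to your test script
    };

Then run it from the project base dir with node_modules\.bin\protractor conf.js (Windows) or npx protractor conf.js.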
I am using the instructions given on the IPFS site for macOS (https://ipfs.io/docs/install/#installing-with-ipfs-update) and following this [tutorial][1].
To build the demo, clone the repo and run the following commands:
$ cd contracts
$ npm install
Running the demo
To run the demo, first start testrpc:
$ testrpc
Then compile and deploy the solidity contracts:
$ truffle compile
$ truffle migrate
Run an instance of IPFS to enable uploads:
$ ipfs daemon
Finally, to build the website, run:
$ npm run dev
I reached this last step (npm run dev) and got an error message saying "npm missing scripts dev".
I am in the contracts directory as specified, and I believe I installed everything there properly. However, I don't see any dev script in the package.json file(s) that the npm run dev command could be referencing. Could this be the problem?
Here are the files within the contracts-master folder:
app
index.html
javascripts
stylesheets
contracts
**ethpm.json** This one?
img
LICENSE
Lockup.sol
migrations
**package.json** Or this one?
Readme.md
scripts
coverage.sh
coveralls.sh
install.sh
test.sh
truffle.js
Any help / suggestions / learning resources would be appreciated.
Thanks for your time,
Elias
Your error is not because of IPFS. You are running dev mode, but the error says that you don't have a dev script. If you are only deploying smart contracts, then you don't need the IPFS daemon. Check package.json:
"scripts": {
"dev": "command to run service"
}
Your package.json doesn't have a dev script; recheck package.json.
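For illustration only (the actual command depends on how the project serves its site; webpack-dev-server here is an assumption, not taken from the tutorial), a dev script might look like:

    "scripts": {
      "dev": "webpack-dev-server --open"
    }

You can also run npm run with no arguments to list every script the package.json actually defines.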
I have the following configuration in .gitlab-ci.yml:
stages:
  - build

build:
  stage: build
  script:
    - npm install -g gulp
    - npm install
    - gulp
But the runner only executes the first command (npm install -g gulp) and then reports success without executing the others.
The build log:
Running with gitlab-ci-multi-runner 1.6.1 (c52ad4f)
Using Shell executor...
Running on WINBUILDER...
Fetching changes...
HEAD is now at 2df18c5 Update .gitlab-ci.yml
From https://.../client
2df18c5..b4efae8 master -> origin/master
Checking out b4efae85 as master...
$ npm install -g gulp
C:\Users\Administrator\AppData\Roaming\npm\gulp -> C:\Users\Administrator\AppData\Roaming\npm\node_modules\gulp\bin\gulp.js
C:\Users\Administrator\AppData\Roaming\npm
`-- gulp@3.9.1
Build succeeded
I've seen several config examples using multiple commands in a stage. I don't understand why the other commands are not running.
It's actually an npm bug, as described here:
https://github.com/npm/npm/issues/2938
On Windows, npm closes the shell when it exits, so the subsequent commands are never called.
A workaround is described in the issue above: just add a call command before invoking npm:
stages:
  - build

build:
  stage: build
  script:
    - call npm install -g gulp
    - call npm install
    - gulp
I am using TeamCity as my CI server (on a Mac) and I am trying to build a web project. When I run grunt serve or grunt buildproduction after changing directory into the cloned folder, it works perfectly fine. But when I run the same thing via the TeamCity server, it gives the error "You need to have Ruby and Compass installed and in your system PATH for this task to work" and the build gets aborted due to warnings. Ruby and Compass are already installed on the server. Please help me with this.
rm -rf $(pwd)/node_modules/*
rm -rf $(pwd)/bower_components/*
npm cache clear
npm install
npm install bower
npm install grunt-ftp-push --save-dev
bower install
grunt buildproduction
This is the Command Line build step I used in TeamCity.
I would say you are probably using a different user, or the shell environment is different (interactive vs. non-interactive), when you run these commands manually versus when TeamCity runs them, so it can't find those packages in the environment/PATH.
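A quick way to confirm this, as a sketch (the directories below are assumptions; point them at wherever Ruby and Compass actually live on the agent), is to print the environment at the top of the TeamCity build step and prepend the right locations to PATH:

    # print what the TeamCity agent actually sees; compare with your interactive shell
    echo "$PATH"
    which ruby || echo "ruby not on PATH"
    which compass || echo "compass not on PATH"

    # prepend the install locations (assumed paths) so grunt can find them
    export PATH="/usr/local/bin:$HOME/.rvm/bin:$PATH"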